WO2024204585A1 - Recognition device, moving body control device, recognition method, and program
- Publication number
- WO2024204585A1 (PCT/JP2024/012743)
- Authority
- WO
- WIPO (PCT)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Definitions
- the present invention relates to a recognition device, a control device for a moving object, a recognition method, and a program.
- In recent years, there has been much research on, and practical application of, autonomously moving vehicles and other moving bodies. In most cases, this requires technology for recognizing objects that exist around the moving body (hereinafter, surrounding recognition technology).
- One invention relating to surrounding recognition technology is disclosed in which, based on driving scene data and driving behavior data in the driving scene in which the vehicle is currently located, it is determined whether or not a certain stable driving condition is met, and if the certain stable driving condition is met, driving scene data and driving behavior data are collected at a frequency lower than a certain sampling frequency, thereby reducing data redundancy in similar scenes and similar driving modes (Patent Document 1).
- the present invention was made in consideration of these circumstances, and one of its objectives is to provide a recognition device, a control device for a moving object, a recognition method, and a program that can quickly track changes in the speed of an object.
- a recognition device is a recognition device that recognizes a position and speed of an object present in the vicinity of a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body, and includes a first recognition unit that repeatedly outputs the position of the object, which is a result of processing for recognizing the position of the object, in a first period, and a second recognition unit that repeatedly outputs the speed of the object, which is a result of processing for recognizing the speed of the object, in a second period that is shorter than the first period.
- the second recognition unit has a function of provisionally recognizing the position of an object whose position has not been output by the first recognition unit, and the recognition device further includes a fusion unit that, when a position is output from the first recognition unit and a velocity is output from the second recognition unit, outputs the position output by the first recognition unit and the velocity output by the second recognition unit as the state of the object, and that, for an object whose position has not been output by the first recognition unit but whose position and velocity have been output by the second recognition unit, outputs the position and velocity output by the second recognition unit as the state of the object.
- the second recognition unit has a function of recognizing the presence of an object whose position has not been output by the first recognition unit.
- the first recognition unit and the second recognition unit each perform processing by executing a processing procedure that is at least partially common to both units.
- each of the first recognition unit and the second recognition unit is realized by one or more processors performing the processing as the first recognition unit and the processing as the second recognition unit in a time-division manner.
- the first recognition unit and the second recognition unit are each realized by a separate processor performing processing.
- the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period
- the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period
- the fusion unit outputs the position and velocity output by the first recognition unit as the state of the object when the first recognition unit outputs for the second or subsequent time.
- the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period, and further includes a fusion unit that outputs the position and velocity output by the first recognition unit as the state of the object at the timing when the first recognition unit outputs the position and velocity.
- the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period
- the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period
- further includes a fusion unit that, at the timing when the first recognition unit performs an output for the first time, outputs, as the state of the object, the position output by the first recognition unit and a velocity obtained by the first recognition unit based on that position and the position output by the second recognition unit at the timing immediately preceding that timing.
- the second recognition unit updates the first position by performing linear interpolation based on historical information of the positions and velocities of the object that have been recognized in the past.
- a control device for a moving body includes the recognition device according to aspect (1) above, and a driving control unit that moves the moving body so as to avoid approaching an object whose state has been output by the recognition device.
- a recognition method is a recognition method executed by a recognition device that recognizes the position and speed of an object present around a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body, and includes repeatedly outputting the position of the object, which is a result of a process for recognizing the position of the object, in a first period, and repeatedly outputting the speed of the object, which is a result of a process for recognizing the speed of the object, in a second period shorter than the first period.
- a program according to another aspect of the present invention is a program for causing a processor of a recognition device that recognizes the position and speed of an object present around a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body to repeatedly output the position of the object, which is a result of processing for recognizing the position of the object, in a first period, and repeatedly output the speed of the object, which is a result of processing for recognizing the speed of the object, in a second period shorter than the first period.
- FIG. 1 is a configuration diagram of a vehicle system 1 that uses a recognition device and a control device for a moving object according to a first embodiment.
- FIG. 2 is a functional configuration diagram of a first control unit and a second control unit.
- FIG. 3 is a diagram illustrating an example of a configuration of a recognition unit.
- FIG. 4 is a diagram illustrating an example of a relationship between a first recognition unit and a second recognition unit.
- FIG. 5 is a diagram illustrating an example of the operation of a first recognition unit and a second recognition unit in a certain scene.
- FIG. 6 is a diagram for explaining the processing of a future position prediction/risk setting unit.
- FIG. 7 is a diagram illustrating an example of the operation of a first recognition unit and a second recognition unit in a certain scene according to the second embodiment.
- FIG. 8 is a diagram illustrating an example of the operation of a first recognition unit and a second recognition unit in a certain scene according to the third embodiment.
- FIG. 9 is a diagram illustrating an example of the operation of a first recognition unit and a second recognition unit in a certain scene according to the fourth embodiment.
- the recognition device recognizes at least the position and speed of an object present around the moving body.
- the recognition device is mounted on the moving body, for example, but may be installed outside the moving body.
- the control device for the moving body controls the drive device of the moving body to move the moving body.
- the moving body refers to any object that can move in space using mechanical power, such as a four-wheeled or two-wheeled vehicle, micromobility, or a drone, aircraft, or ship.
- the moving body may move with a person or animal on board, or may be an unmanned moving body. In the following description, the moving body is assumed to be a vehicle that moves on the road with a person on board, and an "automatic driving control device" is taken as an example of a control device.
- First Embodiment [Overall configuration] FIG. 1 is a configuration diagram of a vehicle system 1 that uses a recognition device and a mobile object control device according to the first embodiment.
- the vehicle on which the vehicle system 1 is mounted is, for example, a two-wheeled, three-wheeled, or four-wheeled vehicle, and its drive source is an internal combustion engine such as a diesel engine or a gasoline engine, an electric motor, or a combination of these.
- the electric motor operates using power generated by a generator connected to the internal combustion engine, or discharged power from a secondary battery or a fuel cell.
- the vehicle system 1 includes, for example, a camera 10, a radar device 12, a LIDAR (Light Detection and Ranging) 14, an object recognition device 16, a communication device 20, an HMI (Human Machine Interface) 30, a vehicle sensor 40, a navigation device 50, an MPU (Map Positioning Unit) 60, a driving operator 80, an automatic driving control device 100, a driving force output device 200, a braking device 210, and a steering device 220.
- These devices and equipment are connected to each other by multiple communication lines such as a CAN (Controller Area Network) communication line, serial communication lines, a wireless communication network, etc.
- the camera 10 is, for example, a digital camera that uses a solid-state imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
- the camera 10 is attached at any location of the vehicle (hereinafter, vehicle M) in which the vehicle system 1 is mounted.
- the camera 10 is attached to the top of the front windshield, the back of the rearview mirror, or the like.
- the camera 10, for example, periodically and repeatedly captures images of the surroundings of the vehicle M.
- the camera 10 may be a stereo camera.
- the radar device 12 emits radio waves such as millimeter waves around the vehicle M and detects radio waves reflected by objects (reflected waves) to detect at least the position (distance and direction) of the object.
- the radar device 12 is attached to any location on the vehicle M.
- the radar device 12 may detect the position and speed of an object using the FM-CW (Frequency Modulated Continuous Wave) method.
- the LIDAR 14 irradiates light (or electromagnetic waves with a wavelength close to that of light) around the vehicle M and measures the scattered light.
- the LIDAR 14 detects the distance to the target based on the time between emitting and receiving the light.
- the irradiated light is, for example, a pulsed laser light.
- the LIDAR 14 can be attached to any location on the vehicle M.
- the object recognition device 16 performs sensor fusion processing on the detection results from some or all of the camera 10, radar device 12, and LIDAR 14 to recognize the position, type, speed, etc. of the object.
- the object recognition device 16 outputs the recognition results to the autonomous driving control device 100.
- the object recognition device 16 may output the detection results from the camera 10, radar device 12, and LIDAR 14 directly to the autonomous driving control device 100.
- the object recognition device 16 may be omitted from the vehicle system 1.
- the communication device 20 communicates with other vehicles in the vicinity of the vehicle M using, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), or DSRC (Dedicated Short Range Communication), or communicates with various server devices via a wireless base station.
- the HMI 30 presents various information to the occupants of the vehicle M and accepts input operations by the occupants.
- the HMI 30 includes various display devices, speakers, buzzers, touch panels, switches, keys, etc.
- the vehicle sensor 40 includes a vehicle speed sensor that detects the speed of the vehicle M, an acceleration sensor that detects the acceleration, a yaw rate sensor that detects the angular velocity around the vertical axis, a direction sensor that detects the direction of the vehicle M, etc.
- the navigation device 50 includes, for example, a GNSS (Global Navigation Satellite System) receiver 51, a navigation HMI 52, and a route determination unit 53.
- the navigation device 50 stores first map information 54 in a storage device such as a HDD (Hard Disk Drive) or flash memory.
- the GNSS receiver 51 determines the position of the vehicle M based on signals received from GNSS satellites. The position of the vehicle M may be determined or supplemented by an INS (Inertial Navigation System) that uses the output of the vehicle sensor 40.
- the navigation HMI 52 includes a display device, a speaker, a touch panel, keys, etc. The navigation HMI 52 may be partially or completely shared with the HMI 30 described above.
- the route determination unit 53 determines a route (hereinafter, a route on a map) from the position of the vehicle M specified by the GNSS receiver 51 (or any input position) to a destination input by the occupant using the navigation HMI 52, for example, by referring to the first map information 54.
- the first map information 54 is, for example, information in which a road shape is expressed by links indicating roads and nodes connected by the links.
- the first map information 54 may include road curvature and POI (Point of Interest) information.
- the route on the map is output to the MPU 60.
- the navigation device 50 may perform route guidance using the navigation HMI 52 based on the route on the map.
- the navigation device 50 may be realized by the function of a terminal device such as a smartphone or tablet terminal owned by the occupant.
- the navigation device 50 may transmit the current position and the destination to a navigation server via the communication device 20, and obtain a route equivalent to the route on the map from the navigation server.
- the MPU 60 includes, for example, a recommended lane determination unit 61, and stores second map information 62 in a storage device such as an HDD or flash memory.
- the recommended lane determination unit 61 divides the route on the map provided by the navigation device 50 into a number of blocks (for example, every 100 m in the vehicle travel direction), and determines a recommended lane for each block by referring to the second map information 62.
- the recommended lane determination unit 61 determines, for example, which lane from the left to use. When there is a branch on the route on the map, the recommended lane determination unit 61 determines a recommended lane so that the vehicle M can use a reasonable route to proceed to the branch destination.
- the second map information 62 is map information with higher accuracy than the first map information 54.
- the second map information 62 includes, for example, information on the center of lanes or information on lane boundaries.
- the second map information 62 may also include road information, traffic regulation information, address information (address and postal code), facility information, telephone number information, information on prohibited sections where mode A or mode B described below is prohibited, and the like.
- the second map information 62 may be updated at any time by the communication device 20 communicating with other devices.
- the driving operators 80 include, for example, a steering wheel, an accelerator pedal, a brake pedal, a shift lever, and other operators.
- the driving operators 80 are fitted with sensors that detect the amount of operation or the presence or absence of operation, and the detection results are output to the automatic driving control device 100, or some or all of the driving force output device 200, the brake device 210, and the steering device 220.
- the automatic driving control device 100 includes, for example, a first control unit 120 and a second control unit 160.
- the first control unit 120 and the second control unit 160 are each realized by, for example, a hardware processor such as a CPU (Central Processing Unit) executing a program (software).
- some or all of these components may be realized by hardware (including circuitry) such as an LSI (Large Scale Integration), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array), or GPU (Graphics Processing Unit), or may be realized by collaboration between software and hardware.
- the program may be stored in advance in a storage device (a storage device with a non-transient storage medium) such as the HDD or flash memory of the autonomous driving control device 100, or may be stored in a removable storage medium such as a DVD or CD-ROM, and installed in the HDD or flash memory of the autonomous driving control device 100 by mounting the storage medium (non-transient storage medium) in a drive device.
- the autonomous driving control device 100 is an example of a "control device for a moving body", the recognition unit 130 is an example of a "recognition device", and the combination of the action plan generation unit 140 and the second control unit 160 is an example of a "driving control unit".
- FIG. 2 is a functional configuration diagram of the first control unit 120 and the second control unit 160.
- the first control unit 120 includes, for example, a recognition unit 130 and an action plan generation unit 140.
- the first control unit 120 realizes, for example, a function based on AI (Artificial Intelligence) and a function based on a pre-given model in parallel.
- the "intersection recognition" function may be realized by executing in parallel the recognition of the intersection using deep learning or the like and the recognition based on pre-given conditions (such as traffic lights and road markings that can be pattern matched), and by scoring both and evaluating them comprehensively. This ensures the reliability of autonomous driving.
- the recognition unit 130 recognizes the presence, position, and states such as speed and acceleration of objects around the vehicle M based on information input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16.
- the position of the object is recognized as a position on an absolute coordinate system with a representative point of the vehicle M (such as the center of gravity or the center of the drive shaft) as the origin, and is used for control.
- the position of the object may be represented by a representative point such as the center of gravity or a corner of the object, or may be represented by an area.
- the "state" of the object may include the acceleration or jerk of the object, or the "behavioral state” (for example, whether or not the object is changing lanes or is about to change lanes).
- the recognition unit 130 also recognizes, for example, the lane in which the vehicle M is traveling (the driving lane). For example, the recognition unit 130 recognizes the driving lane by comparing the pattern of road dividing lines (for example, an arrangement of solid and dashed lines) obtained from the second map information 62 with the pattern of road dividing lines around the vehicle M recognized from the image captured by the camera 10. Note that the recognition unit 130 may recognize the driving lane by recognizing road boundaries including not only road dividing lines but also road shoulders, curbs, medians, guard rails, etc. In this recognition, the position of the vehicle M obtained from the navigation device 50 and the processing results by the INS may be taken into account. The recognition unit 130 also recognizes stop lines, obstacles, red lights, toll booths, and other road phenomena.
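- As a hedged illustration of this dividing-line comparison (the function names, pattern encoding, and scoring are assumptions, not the patent's implementation), the driving lane could be selected by matching the line arrangement recognized from the camera against the arrangements listed per lane in the second map information 62:

```python
def match_driving_lane(map_line_patterns, camera_line_pattern):
    """Pick the lane whose dividing-line arrangement (e.g. 'solid', 'dashed')
    from the map best matches the arrangement recognized from the camera."""
    def score(map_pattern, cam_pattern):
        # count positions where the line types agree
        return sum(m == c for m, c in zip(map_pattern, cam_pattern))

    scores = [score(p, camera_line_pattern) for p in map_line_patterns]
    return int(max(range(len(scores)), key=scores.__getitem__))

# example: lane 1 (dashed/dashed) matches the camera observation best
lane_index = match_driving_lane(
    [("solid", "dashed"), ("dashed", "dashed"), ("dashed", "solid")],
    ("dashed", "dashed"))
```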
- the recognition unit 130 recognizes the position and attitude of the vehicle M with respect to the driving lane. For example, the recognition unit 130 may recognize the deviation of the reference point of the vehicle M from the center of the lane and the angle with respect to a line connecting the centers of the lanes in the direction of travel of the vehicle M as the relative position and attitude of the vehicle M with respect to the driving lane. Alternatively, the recognition unit 130 may recognize the position of the reference point of the vehicle M with respect to either side end of the driving lane (road division line or road boundary) as the relative position of the vehicle M with respect to the driving lane.
- the action plan generation unit 140 generates a target trajectory for the vehicle M to automatically (without the driver's operation) travel in the future so that, in principle, the vehicle travels in the recommended lane determined by the recommended lane determination unit 61 and avoids approaching objects (excluding road division lines, road markings, manholes, and other objects that can be climbed over) recognized by the recognition unit 130.
- the recognition unit 130 sets a risk area centered on the object whose state has been output, and within the risk area, the recognition unit 130 sets a risk as an index value indicating the degree to which the vehicle M should not approach.
- the action plan generation unit 140 generates a target trajectory so that the vehicle M does not pass through a point where the risk is equal to or greater than a predetermined value.
- the target trajectory includes, for example, a speed element.
- the target trajectory is expressed as a sequence of points (trajectory points) to be reached by the vehicle M.
- a trajectory point is a point that the vehicle M should reach at each predetermined travel distance (e.g., about several meters) along the road, and separately, a target speed and a target acceleration are generated as part of the target trajectory for each predetermined sampling time (e.g., about a few tenths of a second).
- a trajectory point may also be a position that the vehicle M should reach at each predetermined sampling time.
- in that case, the information on the target speed and target acceleration is expressed by the interval between trajectory points.
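- The following is a minimal sketch of this representation (the class and function names are hypothetical): a target trajectory held as a sequence of trajectory points, with the speed element attached per sampling time.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrajectoryPoint:
    x: float              # point to be reached [m], longitudinal
    y: float              # point to be reached [m], lateral
    target_speed: float   # speed element of the target trajectory [m/s]
    target_accel: float   # [m/s^2]

def build_target_trajectory(points: List[Tuple[float, float]],
                            speeds: List[float],
                            accels: List[float]) -> List[TrajectoryPoint]:
    """Bundle the points to be reached with their target speed and acceleration."""
    return [TrajectoryPoint(x, y, v, a)
            for (x, y), v, a in zip(points, speeds, accels)]
```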
- the second control unit 160 controls the driving force output device 200, the brake device 210, and the steering device 220 so that the vehicle M passes through the target trajectory generated by the action plan generation unit 140 at the scheduled time.
- the second control unit 160 includes, for example, an acquisition unit 162, a speed control unit 164, and a steering control unit 166.
- the acquisition unit 162 acquires information on the target trajectory (trajectory points) generated by the action plan generation unit 140, and stores it in a memory (not shown).
- the speed control unit 164 controls the driving force output device 200 or the brake device 210 based on the speed element associated with the target trajectory stored in the memory.
- the steering control unit 166 controls the steering device 220 according to the curvature of the target trajectory stored in the memory.
- the processing of the speed control unit 164 and the steering control unit 166 is realized, for example, by a combination of feedforward control and feedback control.
- the steering control unit 166 executes a combination of feedforward control according to the curvature of the road ahead of the vehicle M and feedback control based on the deviation from the target trajectory.
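- A minimal sketch of such a combination (gains and signal names are assumptions; the actual control law is not specified here) could look like the following:

```python
def steering_command(road_curvature: float,
                     lateral_deviation: float,
                     heading_error: float,
                     k_ff: float = 1.0,
                     k_lat: float = 0.5,
                     k_head: float = 1.2) -> float:
    """Feedforward term from the curvature of the road ahead plus feedback
    terms based on the deviation from the target trajectory."""
    feedforward = k_ff * road_curvature
    feedback = k_lat * lateral_deviation + k_head * heading_error
    return feedforward + feedback
```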
- the driving force output device 200 outputs a driving force (torque) to the drive wheels for the vehicle to travel.
- the driving force output device 200 comprises, for example, a combination of an internal combustion engine, an electric motor, and a transmission, and an ECU (Electronic Control Unit) that controls these.
- the ECU controls the above configuration according to information input from the second control unit 160 or information input from the driving operator 80.
- the brake device 210 includes, for example, a brake caliper, a cylinder that transmits hydraulic pressure to the brake caliper, an electric motor that generates hydraulic pressure in the cylinder, and a brake ECU.
- the brake ECU controls the electric motor according to information input from the second control unit 160 or information input from the driving operator 80, so that a brake torque corresponding to the braking operation is output to each wheel.
- the brake device 210 may include a backup mechanism that transmits hydraulic pressure generated by operating the brake pedal included in the driving operator 80 to the cylinder via a master cylinder. Note that the brake device 210 is not limited to the configuration described above, and may be an electronically controlled hydraulic brake device that controls an actuator according to information input from the second control unit 160 to transmit hydraulic pressure from the master cylinder to the cylinder.
- the steering device 220 includes, for example, a steering ECU and an electric motor.
- the electric motor changes the direction of the steered wheels by, for example, applying a force to a rack and pinion mechanism.
- the steering ECU drives the electric motor according to information input from the second control unit 160 or information input from the driving operator 80, to change the direction of the steered wheels.
- FIG. 3 is a diagram showing an example of the configuration of the recognition unit 130.
- the recognition unit 130 includes, for example, a first recognition unit 132, a second recognition unit 134, a fusion unit 136, and a future position prediction/risk setting unit 138.
- the first recognition unit 132 repeatedly outputs the object's position, which is the result of processing to recognize the object's position, in a first period.
- the second recognition unit 134 repeatedly outputs the object's speed, which is the result of processing to recognize the object's speed, in a second period shorter than the first period. In other words, the second recognition unit 134 performs processing at a higher speed than the first recognition unit 132. For example, the second recognition unit 134 performs processing at a frequency several times to several tens of times higher than the first recognition unit 132.
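- As a hedged sketch of this dual-rate arrangement (structure and names assumed; the ratio of 3 is taken from the example scenes described later), the fast path can run at every control timing while the slow path runs once every N timings:

```python
N = 3  # ratio of the first period to the second period (assumed, as in FIG. 5)

def recognition_step(timing: int, frame, slow_recognizer, fast_recognizer):
    """Run the second recognition unit (speed) every control timing and the
    first recognition unit (position) only every N-th control timing."""
    slow_result = slow_recognizer(frame) if timing % N == 0 else None
    fast_result = fast_recognizer(frame)
    return slow_result, fast_result
```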
- the second recognition unit 134 has a function of provisionally recognizing the presence and position of an object whose position has not been output by the first recognition unit 132. There are no particular restrictions on this function, and it is sufficient that it is a process that is simpler and has a lower load than the process performed by the first recognition unit 132. As an example, this function is realized by a process that combines a contour extraction process and a size recognition process. This function may be stopped when the first recognition unit 132 starts to output the position, or may be continued even after the first recognition unit 132 starts to output the position.
- When the first recognition unit 132 outputs a position and the second recognition unit 134 outputs a speed (and position), the fusion unit 136 outputs the position output by the first recognition unit 132 and the speed output by the second recognition unit 134 as the state of the object.
- When the first recognition unit 132 does not output a position but the second recognition unit 134 outputs a position (provisionally recognized as described above) and speed, the fusion unit 136 outputs the position and speed output by the second recognition unit 134 as the state of the object.
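- The fusion rule just described can be summarized by the following sketch (the dictionary keys and signature are assumptions):

```python
def fuse(position_from_first, position_from_second, velocity_from_second):
    """Prefer the position from the first recognition unit; otherwise fall
    back to the provisionally recognized position from the second unit."""
    if position_from_first is not None and velocity_from_second is not None:
        return {"position": position_from_first,
                "velocity": velocity_from_second,
                "position_status": "definitive"}
    if position_from_second is not None and velocity_from_second is not None:
        return {"position": position_from_second,
                "velocity": velocity_from_second,
                "position_status": "provisional"}
    return None  # nothing to output for this object yet
```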
- the first recognition unit 132 and the second recognition unit 134 each perform processing by executing at least a part of a common processing procedure.
- the first recognition unit 132 and the second recognition unit 134 each input an image captured by the camera 10 (or an image that has been preprocessed) into a trained model such as a DNN (Deep Neural Network) to recognize and output the position, speed, etc. of an object, and the second recognition unit 134 compresses the input image to a lower resolution than the image input to the first recognition unit 132, thereby performing processing faster than the first recognition unit 132.
- FIG. 4 is a diagram showing an example of the relationship between the first recognition unit 132 and the second recognition unit 134. In this case, the input image input to the first recognition unit 132 is thinned out in time, resulting in a smaller number of frames than the input image input to the second recognition unit 134.
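- A minimal sketch of this asymmetric preprocessing (the subsampling factors are assumptions; frames are treated as NumPy arrays) is:

```python
def prepare_inputs(frames, thin_every: int = 3):
    """The second recognition unit receives every frame at reduced resolution
    (simple 2x spatial subsampling here); the first recognition unit receives
    full-resolution frames thinned out in time."""
    fast_inputs = [frame[::2, ::2] for frame in frames]  # low resolution, every frame
    slow_inputs = frames[::thin_every]                   # full resolution, fewer frames
    return slow_inputs, fast_inputs
```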
- FIG. 5 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene.
- the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
- the control timing is a virtual time that arrives at a predetermined period (which is the same as the processing period of the second recognition unit 134 in this embodiment).
- the control timing is assumed to start from 1.
- the second recognition unit 134 provisionally recognizes the presence and position of the object for the first time.
- the second recognition unit 134 also recognizes the speed of the object.
- the first recognition unit 132 or the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing.
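- The exact form of this next prediction is not detailed; as a hedged stand-in, the sketch below arranges the past states into a matrix and takes its covariance and eigenvalues as a measure of spread, while the predicted state itself comes from a simple constant-velocity extrapolation:

```python
import numpy as np

def next_prediction(past_positions, past_velocities, dt):
    """Simplified stand-in for the 'next prediction' at the next control timing."""
    positions = np.asarray(past_positions, dtype=float)    # shape (T, 2)
    velocities = np.asarray(past_velocities, dtype=float)  # shape (T, 2)

    states = np.hstack([positions, velocities])            # arrange the past states
    cov = np.cov(states, rowvar=False) if len(states) > 1 else np.zeros((4, 4))
    spread = np.linalg.eigvalsh(cov)                       # eigenvalues of the history

    predicted_pos = positions[-1] + velocities[-1] * dt    # constant-velocity extrapolation
    predicted_vel = velocities[-1]
    return predicted_pos, predicted_vel, spread
```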
- the state output by the recognition unit 130 is defined as provisionally recognized for the object's position and definitively recognized for its speed.
- the degree of control over the object is changed depending on whether it has been provisionally recognized or definitively recognized. For example, for an object whose position has been provisionally recognized, the risk is calculated to be smaller than for an object whose position has been definitively recognized, and the degree of control is relaxed.
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 2 and the result of processing the input image. At this point, it is defined that the presence of the object is confirmed, and the object's position is also confirmed to be recognized definitively.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 5 and the result of processing the input image. After that, the same processing as at control timings 4 to 6 is repeatedly executed.
- the future position prediction/risk setting unit 138 predicts the future position of the object based on the state (position, speed) of the object output by the fusion unit 136, and sets a risk for the object.
- FIG. 6 is a diagram for explaining the processing of the future position prediction/risk setting unit 138.
- the future position prediction/risk setting unit 138 sets risk, which is an index value indicating the degree to which the vehicle M should not enter or approach, on the assumed plane S, which is a virtual plane that represents the space around the vehicle M as a two-dimensional plane viewed from above. The higher the risk value, the more the vehicle M should avoid entering or approaching, and the closer the value is to zero, the more favorable it is for the vehicle M to travel. However, this relationship may be reversed. If the moving object is a flying object such as a drone rather than a vehicle, the future position prediction/risk setting unit 138 may perform similar processing in three-dimensional space rather than on the assumed plane S.
- the future position prediction/risk setting unit 138 sets the risk on the assumed plane S not only for the current time, but also for the future positions of objects predicted in advance and specified at regular time intervals, such as the current time t, Δt later (time t + Δt), 2Δt later (time t + 2Δt), etc.
- the future position prediction/risk setting unit 138 sets the risk of vehicles, pedestrians, bicycles, and other traffic participants (moving targets) on the assumed plane S, with ellipses or circles based on the direction of travel and speed as contour lines, and sets a fixed value of risk for impassable areas.
- DM is the direction of travel of vehicle M.
- R(M1) is the risk of stopped vehicle M1
- R(P) is the risk of pedestrian P. Since pedestrian P is moving in a direction crossing the road, a risk is set at a position different from the current time for each future point in time. The same applies to moving vehicles, bicycles, and the like.
- R(BD) is the risk of impassable areas BD. In the figure, the darkness of the hatching indicates the risk value, and the darker the hatching, the greater the risk.
- the future position prediction/risk setting unit 138 may set the risk so that the value increases the further away from the center of the lane.
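- A hedged sketch of such an elliptical risk field for a moving traffic participant (the decay shape, the scaling with speed, and the parameter names are assumptions) is:

```python
import numpy as np

def risk_at(point, obj_pos, obj_heading, obj_speed,
            base_risk=1.0, along_scale=2.0, cross_scale=1.0):
    """Risk value at a point on the assumed plane S: contour lines are ellipses
    stretched along the object's direction of travel and growing with speed."""
    c, s = np.cos(obj_heading), np.sin(obj_heading)
    d = np.asarray(point, dtype=float) - np.asarray(obj_pos, dtype=float)
    along = c * d[0] + s * d[1]           # offset along the direction of travel
    cross = -s * d[0] + c * d[1]          # lateral offset
    a = along_scale * (1.0 + obj_speed)   # semi-axis grows with speed
    b = cross_scale
    return base_risk * np.exp(-((along / a) ** 2 + (cross / b) ** 2))
```

- A fixed risk value would then be assigned to impassable areas such as BD, and the action plan generation unit 140 keeps the target trajectory away from points where the risk is equal to or greater than the predetermined value.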
- the recognition unit 130 and the automatic driving control device 100 can quickly follow the speed change of the object.
- the second recognition unit 134 performs processing using a compressed input image, so the overall processing time can be shortened, and processing can be performed faster than the first recognition unit 132, which performs processing using a high-resolution image.
- the advantage of processing using a high-resolution image is that it can accurately detect mainly objects that are far away from the vehicle M, but it is known that there is not much performance difference for objects that are close to the vehicle M.
- the second recognition unit 134, which mainly recognizes the speed, performs processing at high speed, so sudden changes in the speed of the object can be detected quickly. Therefore, it is possible to quickly follow the speed change of the object, and ultimately to quickly control the behavior of the vehicle M.
- Each of the first recognition unit 132 and the second recognition unit 134 is realized, for example, by one or more processors performing processing as the first recognition unit 132 and processing as the second recognition unit 134 in a time-division manner. The same may be true for the fusion unit 136 and the future position prediction/risk setting unit 138. Alternatively, each of the first recognition unit 132 and the second recognition unit 134 may be realized by separate processors performing processing. The same may be true for the fusion unit 136 and the future position prediction/risk setting unit 138.
- the second embodiment differs from the first embodiment in that the first recognition unit 132 starts outputting the position when it is able to recognize the position of the object, and outputs both the position and the velocity after it recognizes the velocity of the object. Furthermore, the second embodiment differs from the first embodiment in that after the position of the object is recognized by the first recognition unit 132, the position of the object is updated based on the velocity recognized by the second recognition unit 134 at a control timing when the first recognition unit 132 does not operate.
- the same reference numerals and names are used for configurations having the same functions as those in the first embodiment, and a detailed description thereof will be omitted.
- FIG. 7 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the second embodiment.
- the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
- the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time.
- the second recognition unit 134 also recognizes the speed of the object. That is, the position and speed are recognized based on the object recognition result ("high-speed object recognition result") by the second recognition unit 134, which performs object recognition at high speed.
- the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before the control timing in question, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing.
- the state output by the recognition unit 130 is defined as tentatively recognized for the object position and definitively recognized for the speed.
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
- Control timing 3 is the timing when the first recognition unit 132 outputs for the first time.
- the first recognition unit 132 updates the object position based on the result of the next prediction at control timing 2 and the result of processing the input image. That is, the object position is updated based on the object recognition result by the first recognition unit 132 performing object recognition at normal speed (the "normal object recognition result").
- the second recognition unit 134 updates the object speed based on the result of the next prediction at control timing 2 and the result of processing the input image. That is, the object speed is updated based on the high-speed object recognition result by the second recognition unit 134. At this time, it is defined that the existence of the object is confirmed, and the object position is also confirmed to be recognized definitively.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position based on linear interpolation.
- the second recognition unit 134 estimates the current object's position by taking the object's position recognized by the first recognition unit 132 at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 4) to this reference position, and updates the object's position to the estimated position.
- the linear interpolation uses a covariance matrix (position and speed history information) generated by arranging the states calculated before the control timing in question, and its eigenvalues, etc.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by taking the object's position updated at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (movement amount from control timing 3 to control timing 5) to this reference position, and updates the object's position at the estimated position.
- the second recognition unit 134 estimates the current object's position by taking the object's position updated at control timing 4 as a reference position and adding the object's movement amount calculated using linear interpolation (movement amount from control timing 4 to control timing 5) to this reference position, and updates the object's position at the estimated position.
- the second recognition unit 134 updates the first position by performing linear interpolation based on the historical information of the positions and velocities of objects that have been recognized in the past.
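- Concretely, this update could look like the following sketch (the names and the use of a simple mean over the velocity history are assumptions):

```python
import numpy as np

def updated_position(reference_position, velocity_history, dt, n_timings):
    """Starting from the reference position recognized by the first recognition
    unit, add the movement amount interpolated from the velocities recognized
    over the n_timings control timings since that reference."""
    v = np.mean(np.asarray(velocity_history[-n_timings:], dtype=float), axis=0)
    return np.asarray(reference_position, dtype=float) + v * dt * n_timings
```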
- Control timing 6 is the timing at which the first recognition unit 132 performs the second or subsequent output.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image.
- information on the object's position recognized by the first recognition unit 132 is accumulated, making it possible to estimate the speed based on the position information. Therefore, the first recognition unit 132 updates the object's speed based on the recognition result of the object's position previously recognized (for example, the recognition result at control timing 3) and the result of processing the input image. Thereafter, the same processing as at control timings 4 to 6 is repeatedly executed.
- the first recognition unit 132 has a function of repeatedly outputting the position and velocity of an object in a first period.
- the second recognition unit 134 has a function of repeatedly outputting the position and velocity of an object in a second period.
- the fusion unit 136 outputs the position output by the first recognition unit 132 and the velocity output by the second recognition unit 134 as the state of the object.
- the fusion unit 136 outputs the position and velocity output by the first recognition unit 132 as the state of the object.
- the second recognition unit 134 updates the object's position based on the speed, thereby improving the accuracy of estimating the object's position, and effectively reducing the impact of position errors on the fusion unit to improve fusion performance.
- the third embodiment differs from the first and second embodiments in that the first recognition unit 132 does not output anything until it recognizes the speed, and outputs both the speed and the position after it recognizes the speed.
- components having the same functions as those in the first and second embodiments are given the same reference numerals and names, and a detailed description thereof will be omitted.
- FIG. 8 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the third embodiment.
- the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
- the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time.
- the second recognition unit 134 also recognizes the speed of the object.
- the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing.
- the state output by the recognition unit 130 is defined as tentatively recognized for the object's position and definitively recognized for its speed.
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
- Control timing 3 is also the timing at which the first recognition unit 132 operates for the first time.
- the first recognition unit 132 recognizes the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image, but does not output the recognized position, but stores it in memory (not shown).
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 3 and the result of processing the input image.
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 4 and the result of processing the input image.
- Control timing 6 is the timing when the first recognition unit 132 outputs for the first time.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image.
- At this time, the existence of the object is regarded as confirmed, and the object's position is also regarded as definitively recognized.
- information on the object's position recognized by the first recognition unit 132 is accumulated, making it possible to estimate the speed based on the information on that position. Therefore, the object's speed is updated based on the recognition result of the object's position previously recognized by the first recognition unit 132 (for example, the recognition result at control timing 3) and the result of processing the input image.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 6 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by setting the object's position updated at control timing 6 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 6 to control timing 7) to this reference position, and updates the object's position to the estimated position.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 7 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by taking the object's position updated at control timing 6 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 6 to control timing 8) to this reference position, and updates the object's position at the estimated position.
- the second recognition unit 134 estimates the current object's position by taking the object's position updated at control timing 7 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 7 to control timing 8) to this reference position, and updates the object's position at the estimated position.
- Control timing 9 is the timing at which the first recognition unit 132 performs the second output.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 8 and the result of processing the input image.
- the first recognition unit 132 also updates the object's speed based on the recognition result of the object's position previously recognized by the first recognition unit 132 (for example, the recognition result at control timings 3 and 6) and the result of processing the input image.
- the fusion unit 136 outputs the position and speed output by the first recognition unit 132 as the object's state. Thereafter, the same processes as those at control timings 7 to 9 are repeatedly executed.
- According to the third embodiment described above, it is possible to quickly track changes in the speed of an object, and therefore to quickly control the behavior of the moving body. Furthermore, after information on the object's position recognized by the first recognition unit 132 is accumulated and it becomes possible to estimate the speed, at the control timing when the first recognition unit 132 operates, the speed of the object is updated based on the speed recognized by the first recognition unit 132. This makes it possible to improve the accuracy of estimating the object's position and speed. Furthermore, it is possible to maintain design diversity in cases such as when the second recognition unit 134 is incorporated into an existing system that only has the first recognition unit 132.
- the fourth embodiment differs from the first to third embodiments in that, at the control timing when the first recognition unit 132 operates for the first time, the speed of the object is updated based on the recognition result by the second recognition unit 134 at the previous control timing immediately preceding the control timing in question and the recognition result by the first recognition unit 132 at the control timing in question.
- components having the same functions as those in the first to third embodiments are given the same reference numerals and names, and a detailed description thereof will be omitted.
- FIG. 9 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the fourth embodiment.
- the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
- the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time.
- the second recognition unit 134 also recognizes the speed of the object.
- the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing.
- the state output by the recognition unit 130 is defined as tentatively recognized for the object's position and definitively recognized for its speed.
- the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
- Control timing 3 is the timing when the first recognition unit 132 outputs for the first time.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image. At this time, it is defined that the existence of the object is confirmed and the object's position is also confirmed.
- the first recognition unit 132 updates the object's speed based on the recognition result by the second recognition unit 134 at control timing 2 (the object position from the previous high-speed recognition result) and the recognition result by the first recognition unit 132 at control timing 3 (the object position from the current normal recognition result).
- the fusion unit 136 outputs, as the state of the object, the position output by the first recognition unit 132 and the speed output by the first recognition unit 132, the speed being based on the position output by the first recognition unit 132 and the position output by the second recognition unit 134 at the previous timing before that timing.
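The speed update at the first output of the first recognition unit 132 can be read as a finite difference between the position from the previous high-speed recognition result and the position from the current normal recognition result. The sketch below illustrates this under assumed names and an assumed control period of 30 ms; it is not code from the disclosure.

```python
def estimate_velocity(prev_position, curr_position, dt):
    """Finite-difference velocity from the position output by the second
    recognition unit at the previous timing and the position output by the
    first recognition unit at the current timing."""
    return tuple((c - p) / dt for c, p in zip(curr_position, prev_position))

# previous high-rate result (control timing 2) and current normal-rate result
# (control timing 3), assumed to be one 30 ms control period apart
v = estimate_velocity(prev_position=(10.0, 2.0), curr_position=(9.85, 2.0), dt=0.03)
print(v)  # (-5.0, 0.0)
```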
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by setting the object's position recognized by the first recognition unit 132 at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 4) to this reference position, and updates the object's position to the estimated position.
- the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 5) to this reference position, and updates the object's position to the estimated position.
- the second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 4 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 4 to control timing 5) to this reference position, and updates the object's position to the estimated position.
- Control timing 6 is the timing at which the first recognition unit 132 performs the second or subsequent output.
- the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image.
- information on the object's position recognized by the first recognition unit 132 is accumulated, making it possible to estimate the speed based on the position information. Therefore, the first recognition unit 132 updates the object's speed based on the recognition result of the object's position previously recognized (for example, the recognition result at control timing 3) and the result of processing the input image. Thereafter, the same processing as at control timings 4 to 6 is repeatedly executed.
- the speed of the object is updated based on the recognition result by the second recognition unit 134 at the previous timing and the recognition result by the first recognition unit 132 at the current control timing. This makes it possible to maintain design diversity when, for example, incorporating the second recognition unit 134 into an existing system equipped only with the first recognition unit 132.
- The control device is described above as being mounted on the vehicle M (i.e., the moving body), but the present invention is not limited thereto.
- the control device may be installed at a location away from the moving body, and may acquire output data from the camera 10, radar device 12, etc. through communication and transmit drive instruction signals to the moving body, i.e., may remotely control the moving body.
- the embodiments described above are merely examples, and the present invention is not limited to the configurations of these embodiments. It is also possible to combine the functions or configurations included in each embodiment as appropriate. For example, as described in the second to fourth embodiments, it is also possible to incorporate into the first embodiment a configuration in which, after the position of an object is recognized by the first recognition unit 132, the position of the object is updated based on the speed recognized by the second recognition unit 134 at a control timing when the first recognition unit 132 is not operating.
- A recognition device that recognizes a position and a velocity of an object present around a moving body based on an output of a detection device for detecting a surrounding situation of the moving body, the recognition device comprising: one or more storage media storing computer-readable instructions; and a processor coupled to the one or more storage media, wherein the processor executes the computer-readable instructions to: repeatedly output the position of the object, which is a result of performing a process for recognizing the position of the object, at a first period; and repeatedly output the velocity of the object, which is a result of performing a process for recognizing the velocity of the object, at a second period shorter than the first period.
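As an informal illustration of the dual-period output described in this configuration, the following Python sketch runs position recognition at a first period and velocity recognition at a shorter second period; all names, periods, and the placeholder recognition functions are assumptions, not part of the disclosure.

```python
FIRST_PERIOD = 0.09   # position output period (assumed, e.g. 90 ms)
SECOND_PERIOD = 0.03  # velocity output period (assumed, e.g. 30 ms)
RATIO = round(FIRST_PERIOD / SECOND_PERIOD)  # 3 control timings per position output

def recognize_position(frame):
    # stand-in for the slower position recognition process
    return frame["position"]

def recognize_velocity(frame):
    # stand-in for the faster velocity recognition process
    return frame["velocity"]

def recognition_loop(frames):
    """Yield (timing, position_or_None, velocity) once per control timing."""
    for timing, frame in enumerate(frames, start=1):
        velocity = recognize_velocity(frame)  # output every second period
        position = recognize_position(frame) if timing % RATIO == 0 else None  # every first period
        yield timing, position, velocity

frames = [{"position": (10.0 - 0.15 * i, 2.0), "velocity": (-5.0, 0.0)} for i in range(6)]
for result in recognition_loop(frames):
    print(result)
```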
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Abstract
A recognition device that recognizes the position and the speed of an object present around a moving body on the basis of the output of a detection device for detecting the conditions around the moving body, the recognition device comprising: a first recognition unit that repeatedly outputs the position of the object, which is a result of performing a process for recognizing the position of the object, at a first cycle; and a second recognition unit that repeatedly outputs the speed of the object, which is a result of performing a process for recognizing the speed of the object, at a second cycle shorter than the first cycle.
Description
The present invention relates to a recognition device, a control device for a moving body, a recognition method, and a program.
In recent years, research and practical application relating to moving bodies such as vehicles that move autonomously have been advancing. In most cases, such technology requires, as an indispensable component, technology for recognizing objects present around the moving body (hereinafter, surrounding recognition technology). As an invention related to surrounding recognition technology, an invention has been disclosed in which, based on driving scene data and driving behavior data in the driving scene in which the vehicle is currently located, it is determined whether or not a predetermined stable driving condition is satisfied, and if the predetermined stable driving condition is satisfied, driving scene data and driving behavior data are collected at a frequency lower than a predetermined sampling frequency, thereby reducing data redundancy in similar scenes and similar driving modes (Patent Document 1).
In realizing surrounding recognition technology, it is necessary to balance reducing the load on the processor with quickly tracking sudden events. While the above conventional technology comprehensively assesses the environment of the moving body, there is a concern that it cannot quickly track local changes. Specifically, the conventional technology may not be able to quickly track changes in the speed of an object.
The present invention has been made in consideration of these circumstances, and one of its objects is to provide a recognition device, a control device for a moving body, a recognition method, and a program that can quickly track changes in the speed of an object.
The recognition device, the control device for a moving body, the recognition method, and the program according to the present invention employ the following configurations.
(1): A recognition device according to one aspect of the present invention is a recognition device that recognizes the position and speed of an object present in the vicinity of a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body, and includes a first recognition unit that repeatedly outputs the position of the object, which is a result of performing a process for recognizing the position of the object, at a first period, and a second recognition unit that repeatedly outputs the speed of the object, which is a result of performing a process for recognizing the speed of the object, at a second period that is shorter than the first period.
(2): In the above aspect (1), the second recognition unit has a function of provisionally recognizing the position of an object whose position has not been output by the first recognition unit, and the recognition device further includes a fusion unit that, for an object whose position is output by the first recognition unit and whose velocity is output by the second recognition unit, outputs the position output by the first recognition unit and the velocity output by the second recognition unit as the state of the object, and that, for an object whose position has not been output by the first recognition unit and whose position and velocity have been output by the second recognition unit, outputs the position and velocity output by the second recognition unit as the state of the object.
(3): In the above aspect (2), the second recognition unit has a function of recognizing the presence of an object whose position has not been output by the first recognition unit.
(4): In the above aspect (1), the first recognition unit and the second recognition unit each perform processing by executing a processing procedure that is at least partially common to both units.
(5): In the above aspect (1), each of the first recognition unit and the second recognition unit is realized by one or more processors performing the processing as the first recognition unit and the processing as the second recognition unit in a time-division manner.
(6): In the above aspect (1), the first recognition unit and the second recognition unit are each realized by a separate processor performing processing.
(7): In the above aspect (1), the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period, the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period, and the recognition device further includes a fusion unit that, at the timing when the first recognition unit performs an output for the first time, outputs the position output by the first recognition unit and the velocity output by the second recognition unit as the state of the object.
(8): In the above aspect (7), the fusion unit outputs the position and velocity output by the first recognition unit as the state of the object at the timings when the first recognition unit performs the second or subsequent outputs.
(9): In the above aspect (1), the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period, and the recognition device further includes a fusion unit that outputs the position and velocity output by the first recognition unit as the state of the object at the timings when the first recognition unit performs an output.
(10): In the above aspect (1), the first recognition unit has a function of repeatedly outputting the position and velocity of the object in the first period, the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period, and the recognition device further includes a fusion unit that, at the timing when the first recognition unit performs an output for the first time, outputs as the state of the object the position output by the first recognition unit and the velocity output by the first recognition unit based on the position output by the first recognition unit and the position output by the second recognition unit at the previous timing before that timing.
(11): In any of the above aspects (1) to (10), after the first recognition unit outputs a first position at a first timing, at a second timing at which the first recognition unit does not perform an output and the second recognition unit performs an output, the second recognition unit updates the first position by performing linear interpolation based on historical information of the positions and velocities of the object recognized in the past.
(12): A control device for a moving body according to another aspect of the present invention includes the recognition device according to the above aspect (1), and a driving control unit that moves the moving body so as to avoid approaching an object whose state has been output by the recognition device.
(13): A recognition method according to another aspect of the present invention is a recognition method executed by a recognition device that recognizes the position and speed of an object present around a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body, and includes repeatedly outputting the position of the object, which is a result of a process for recognizing the position of the object, in a first period, and repeatedly outputting the speed of the object, which is a result of a process for recognizing the speed of the object, in a second period shorter than the first period.
(14): A program according to another aspect of the present invention is a program for causing a processor of a recognition device that recognizes the position and speed of an object present around a moving body based on the output of a detection device for detecting the surrounding conditions of the moving body to repeatedly output the position of the object, which is a result of processing for recognizing the position of the object, in a first period, and repeatedly output the speed of the object, which is a result of processing for recognizing the speed of the object, in a second period shorter than the first period.
According to aspects (1) to (14), it is possible to quickly track changes in the speed of an object.
[Overview]
Hereinafter, embodiments of a recognition device, a control device for a moving body, a recognition method, and a program of the present invention will be described with reference to the drawings. The recognition device recognizes at least the position and speed of an object present around the moving body. The recognition device is, for example, mounted on the moving body, but may be installed outside the moving body. The control device for the moving body controls the drive device of the moving body to move the moving body. The moving body refers to any object that can move in space using mechanical power, such as a four-wheeled or two-wheeled vehicle, micromobility, or a drone, aircraft, or ship. The moving body may move with a person or animal on board, or may be an unmanned moving body. In the following description, the moving body is assumed to be a vehicle that moves on the road with a person on board, and an "automatic driving control device" is taken as an example of the control device.
<First Embodiment>
[Overall configuration]
FIG. 1 is a configuration diagram of a vehicle system 1 that uses a recognition device and a moving body control device according to the first embodiment. The vehicle on which the vehicle system 1 is mounted is, for example, a two-wheeled, three-wheeled, or four-wheeled vehicle, and its drive source is an internal combustion engine such as a diesel engine or a gasoline engine, an electric motor, or a combination of these. The electric motor operates using power generated by a generator connected to the internal combustion engine, or discharged power from a secondary battery or a fuel cell.
The vehicle system 1 includes, for example, a camera 10, a radar device 12, a LIDAR (Light Detection and Ranging) 14, an object recognition device 16, a communication device 20, an HMI (Human Machine Interface) 30, a vehicle sensor 40, a navigation device 50, an MPU (Map Positioning Unit) 60, a driving operator 80, an automatic driving control device 100, a driving force output device 200, a brake device 210, and a steering device 220. These devices and equipment are connected to each other by multiple communication lines such as a CAN (Controller Area Network) communication line, serial communication lines, a wireless communication network, etc. Note that the configuration shown in FIG. 1 is merely an example, and some of the configuration may be omitted, or other configurations may be added.
The camera 10 is, for example, a digital camera that uses a solid-state imaging element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). The camera 10 is attached at any location of the vehicle (hereinafter, vehicle M) in which the vehicle system 1 is mounted. When capturing an image of the front, the camera 10 is attached to the top of the front windshield, the back of the rearview mirror, or the like. The camera 10, for example, periodically and repeatedly captures images of the surroundings of the vehicle M. The camera 10 may be a stereo camera.
The radar device 12 emits radio waves such as millimeter waves around the vehicle M and detects radio waves reflected by objects (reflected waves) to detect at least the position (distance and direction) of the object. The radar device 12 is attached to any location on the vehicle M. The radar device 12 may detect the position and speed of an object using the FM-CW (Frequency Modulated Continuous Wave) method.
The LIDAR 14 irradiates light (or electromagnetic waves with a wavelength close to that of light) around the vehicle M and measures the scattered light. The LIDAR 14 detects the distance to the target based on the time between emitting and receiving the light. The irradiated light is, for example, a pulsed laser light. The LIDAR 14 can be attached to any location on the vehicle M.
The object recognition device 16 performs sensor fusion processing on the detection results from some or all of the camera 10, radar device 12, and LIDAR 14 to recognize the position, type, speed, etc. of the object. The object recognition device 16 outputs the recognition results to the automatic driving control device 100. The object recognition device 16 may output the detection results from the camera 10, radar device 12, and LIDAR 14 directly to the automatic driving control device 100. The object recognition device 16 may be omitted from the vehicle system 1.
The communication device 20 communicates with other vehicles in the vicinity of the vehicle M using, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), or DSRC (Dedicated Short Range Communication), or communicates with various server devices via a wireless base station.
The HMI 30 presents various information to the occupants of the vehicle M and accepts input operations by the occupants. The HMI 30 includes various display devices, speakers, buzzers, touch panels, switches, keys, etc.
The vehicle sensor 40 includes a vehicle speed sensor that detects the speed of the vehicle M, an acceleration sensor that detects the acceleration, a yaw rate sensor that detects the angular velocity around the vertical axis, a direction sensor that detects the direction of the vehicle M, etc.
The navigation device 50 includes, for example, a GNSS (Global Navigation Satellite System) receiver 51, a navigation HMI 52, and a route determination unit 53. The navigation device 50 stores first map information 54 in a storage device such as a HDD (Hard Disk Drive) or flash memory. The GNSS receiver 51 determines the position of the vehicle M based on signals received from GNSS satellites. The position of the vehicle M may be determined or supplemented by an INS (Inertial Navigation System) that uses the output of the vehicle sensor 40. The navigation HMI 52 includes a display device, a speaker, a touch panel, keys, etc. The navigation HMI 52 may be partially or completely shared with the HMI 30 described above. The route determination unit 53 determines a route (hereinafter, a route on a map) from the position of the vehicle M specified by the GNSS receiver 51 (or any input position) to a destination input by the occupant using the navigation HMI 52, for example, by referring to the first map information 54. The first map information 54 is, for example, information in which a road shape is expressed by links indicating roads and nodes connected by the links. The first map information 54 may include road curvature and POI (Point of Interest) information. The route on the map is output to the MPU 60. The navigation device 50 may perform route guidance using the navigation HMI 52 based on the route on the map. The navigation device 50 may be realized by the function of a terminal device such as a smartphone or tablet terminal owned by the occupant. The navigation device 50 may transmit the current position and the destination to a navigation server via the communication device 20, and obtain a route equivalent to the route on the map from the navigation server.
The MPU 60 includes, for example, a recommended lane determination unit 61, and stores second map information 62 in a storage device such as an HDD or flash memory. The recommended lane determination unit 61 divides the route on the map provided by the navigation device 50 into a number of blocks (for example, every 100 [m] in the vehicle travel direction), and determines a recommended lane for each block by referring to the second map information 62. The recommended lane determination unit 61 determines, for example, in which lane from the left to travel. When there is a branch on the route on the map, the recommended lane determination unit 61 determines a recommended lane so that the vehicle M can travel on a reasonable route to proceed to the branch destination.
The second map information 62 is map information with higher accuracy than the first map information 54. The second map information 62 includes, for example, information on the center of lanes or information on lane boundaries. The second map information 62 may also include road information, traffic regulation information, address information (address and postal code), facility information, telephone number information, information on prohibited sections where mode A or mode B described below is prohibited, and the like. The second map information 62 may be updated at any time by the communication device 20 communicating with other devices.
The driving operators 80 include, for example, a steering wheel, an accelerator pedal, a brake pedal, a shift lever, and other operators. The driving operators 80 are fitted with sensors that detect the amount of operation or the presence or absence of operation, and the detection results are output to the automatic driving control device 100, or some or all of the driving force output device 200, the brake device 210, and the steering device 220.
The automatic driving control device 100 includes, for example, a first control unit 120 and a second control unit 160. The first control unit 120 and the second control unit 160 are each realized by, for example, a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Furthermore, some or all of these components may be realized by hardware (including circuitry) such as an LSI (Large Scale Integration), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array), or GPU (Graphics Processing Unit), or may be realized by cooperation between software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as the HDD or flash memory of the automatic driving control device 100, or may be stored in a removable storage medium such as a DVD or CD-ROM and installed in the HDD or flash memory of the automatic driving control device 100 by mounting the storage medium (a non-transitory storage medium) in a drive device.
The automatic driving control device 100 is an example of a "control device for a moving body", the recognition unit 130 is an example of a "recognition device", and the combination of the action plan generation unit 140 and the second control unit 160 is an example of a "driving control unit".
FIG. 2 is a functional configuration diagram of the first control unit 120 and the second control unit 160. The first control unit 120 includes, for example, a recognition unit 130 and an action plan generation unit 140. The first control unit 120 realizes, for example, a function based on AI (Artificial Intelligence) and a function based on a pre-given model in parallel. For example, the "intersection recognition" function may be realized by executing in parallel the recognition of the intersection using deep learning or the like and the recognition based on pre-given conditions (such as traffic lights and road markings that can be pattern matched), and by scoring both and evaluating them comprehensively. This ensures the reliability of autonomous driving.
The recognition unit 130 recognizes the presence, position, and states such as the speed and acceleration of objects around the vehicle M based on information input from the camera 10, the radar device 12, and the LIDAR 14 via the object recognition device 16. The position of the object is recognized as a position on an absolute coordinate system with a representative point of the vehicle M (such as the center of gravity or the center of the drive shaft) as the origin, and is used for control. The position of the object may be represented by a representative point such as the center of gravity or a corner of the object, or may be represented by an area. The "state" of the object may include the acceleration or jerk of the object, or the "behavioral state" (for example, whether or not the object is changing lanes or is about to change lanes).
The recognition unit 130 also recognizes, for example, the lane in which the vehicle M is traveling (the driving lane). For example, the recognition unit 130 recognizes the driving lane by comparing the pattern of road dividing lines (for example, an arrangement of solid and dashed lines) obtained from the second map information 62 with the pattern of road dividing lines around the vehicle M recognized from the image captured by the camera 10. Note that the recognition unit 130 may recognize the driving lane by recognizing travel path boundaries (road boundaries) including not only road dividing lines but also road shoulders, curbs, medians, guardrails, and the like. In this recognition, the position of the vehicle M obtained from the navigation device 50 and the processing results by the INS may be taken into account. The recognition unit 130 also recognizes stop lines, obstacles, red lights, toll booths, and other road phenomena.
When recognizing the driving lane, the recognition unit 130 recognizes the position and attitude of the vehicle M with respect to the driving lane. For example, the recognition unit 130 may recognize the deviation of the reference point of the vehicle M from the center of the lane and the angle with respect to a line connecting the centers of the lanes in the direction of travel of the vehicle M as the relative position and attitude of the vehicle M with respect to the driving lane. Alternatively, the recognition unit 130 may recognize the position of the reference point of the vehicle M with respect to either side end of the driving lane (road division line or road boundary) as the relative position of the vehicle M with respect to the driving lane.
The action plan generation unit 140 generates a target trajectory along which the vehicle M will automatically travel in the future (without depending on the driver's operation) so that, in principle, the vehicle travels in the recommended lane determined by the recommended lane determination unit 61 and, furthermore, avoids approaching objects recognized by the recognition unit 130 (excluding road division lines, road markings, manholes, and other objects that can be driven over). For example, the recognition unit 130 sets a risk area centered on an object whose state has been output, and within the risk area the recognition unit 130 sets a risk as an index value indicating the degree to which the vehicle M should not approach. The action plan generation unit 140 generates the target trajectory so that it does not pass through points where the risk is equal to or greater than a predetermined value. Since the objects include moving objects, the risk distribution is not a single one per control cycle; rather, it is set for multiple future time points, taking into account the future position of the object predicted based on the speed of the object. The target trajectory includes, for example, a speed element. For example, the target trajectory is expressed as a sequence of points (trajectory points) that the vehicle M should reach. A trajectory point is a point that the vehicle M should reach at every predetermined travel distance along the road (e.g., about several meters), and separately, a target speed and a target acceleration for every predetermined sampling time (e.g., a few tenths of a second) are generated as part of the target trajectory. A trajectory point may also be a position that the vehicle M should reach at each predetermined sampling time. In this case, the information on the target speed and target acceleration is expressed by the interval between trajectory points.
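To illustrate how a target trajectory could be checked against such a risk map, the sketch below rejects any candidate whose trajectory points pass through locations where the risk is at or above a threshold. The radial risk model, names, and numbers are illustrative assumptions, not the method of the disclosure.

```python
def risk_at(point, risks):
    """Total risk at a point from a set of (center, peak, radius) risk sources
    (a simplified stand-in for the risk map described above)."""
    x, y = point
    total = 0.0
    for (cx, cy), peak, radius in risks:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += peak * max(0.0, 1.0 - d2 / radius ** 2)
    return total

def accept_trajectory(trajectory_points, risks, threshold):
    """A target trajectory is acceptable only if no trajectory point lies
    where the risk is at or above the threshold."""
    return all(risk_at(p, risks) < threshold for p in trajectory_points)

risks = [((15.0, 0.0), 1.0, 3.0)]                 # one stopped vehicle ahead (assumed values)
candidate = [(2.0 * i, 0.5) for i in range(10)]   # trajectory points every 2 m
print(accept_trajectory(candidate, risks, threshold=0.5))  # False: the path crosses the risk area
```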
The second control unit 160 controls the driving force output device 200, the brake device 210, and the steering device 220 so that the vehicle M passes through the target trajectory generated by the action plan generation unit 140 at the scheduled time.
The second control unit 160 includes, for example, an acquisition unit 162, a speed control unit 164, and a steering control unit 166. The acquisition unit 162 acquires information on the target trajectory (trajectory points) generated by the action plan generation unit 140, and stores it in a memory (not shown). The speed control unit 164 controls the driving force output device 200 or the brake device 210 based on the speed element associated with the target trajectory stored in the memory. The steering control unit 166 controls the steering device 220 according to the curvature of the target trajectory stored in the memory. The processing of the speed control unit 164 and the steering control unit 166 is realized, for example, by a combination of feedforward control and feedback control. As an example, the steering control unit 166 executes a combination of feedforward control according to the curvature of the road ahead of the vehicle M and feedback control based on the deviation from the target trajectory.
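As a rough illustration of combining feedforward control based on the road curvature ahead with feedback control based on the deviation from the target trajectory, consider the following sketch; the gains and the error convention are assumptions, not values from the disclosure.

```python
def steering_command(road_curvature, lateral_error, heading_error,
                     k_ff=1.0, k_lat=0.5, k_head=1.2):
    """Feedforward term from the curvature of the road ahead plus a
    feedback term based on the deviation from the target trajectory
    (gains are illustrative, not from the disclosure)."""
    feedforward = k_ff * road_curvature
    feedback = -(k_lat * lateral_error + k_head * heading_error)
    return feedforward + feedback

# gentle left curve, vehicle 0.2 m right of the target line and pointing 0.01 rad off it
print(steering_command(road_curvature=0.02, lateral_error=-0.2, heading_error=0.01))
```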
The driving force output device 200 outputs a driving force (torque) to the drive wheels for the vehicle to travel. The driving force output device 200 comprises, for example, a combination of an internal combustion engine, an electric motor, and a transmission, and an ECU (Electronic Control Unit) that controls these. The ECU controls the above configuration according to information input from the second control unit 160 or information input from the driving operator 80.
The brake device 210 includes, for example, a brake caliper, a cylinder that transmits hydraulic pressure to the brake caliper, an electric motor that generates hydraulic pressure in the cylinder, and a brake ECU. The brake ECU controls the electric motor according to information input from the second control unit 160 or information input from the driving operator 80, so that a brake torque corresponding to the braking operation is output to each wheel. The brake device 210 may include a backup mechanism that transmits hydraulic pressure generated by operating the brake pedal included in the driving operator 80 to the cylinder via a master cylinder. Note that the brake device 210 is not limited to the configuration described above, and may be an electronically controlled hydraulic brake device that controls an actuator according to information input from the second control unit 160 to transmit hydraulic pressure from the master cylinder to the cylinder.
The steering device 220 includes, for example, a steering ECU and an electric motor. The electric motor changes the direction of the steered wheels by, for example, applying a force to a rack and pinion mechanism. The steering ECU drives the electric motor according to information input from the second control unit 160 or information input from the driving operator 80, to change the direction of the steered wheels.
[More detailed configuration of the recognition unit]
Here, a more detailed configuration of the recognition unit 130 will be described. In order to realize the above functions, the recognition unit 130 has the configuration described below. FIG. 3 is a diagram showing an example of the configuration of the recognition unit 130. The recognition unit 130 includes, for example, a first recognition unit 132, a second recognition unit 134, a fusion unit 136, and a future position prediction/risk setting unit 138.
The first recognition unit 132 repeatedly outputs the object's position, which is the result of processing to recognize the object's position, in a first period.
The second recognition unit 134 repeatedly outputs the object's speed, which is the result of processing to recognize the object's speed, in a second period shorter than the first period. In other words, the second recognition unit 134 performs processing at a higher speed than the first recognition unit 132. For example, the second recognition unit 134 performs processing at a frequency several times to several tens of times higher than the first recognition unit 132. The second recognition unit 134 has a function of provisionally recognizing the presence and position of an object whose position has not been output by the first recognition unit 132. There are no particular restrictions on this function, and it is sufficient that it is a process that is simpler and has a lower load than the process performed by the first recognition unit 132. As an example, this function is realized by a process that combines a contour extraction process and a size recognition process. This function may be stopped when the first recognition unit 132 starts to output the position, or may be continued even after the first recognition unit 132 starts to output the position.
When the first recognition unit 132 outputs a position and the second recognition unit 134 outputs a speed (and position), the fusion unit 136 outputs the position output by the first recognition unit 132 and the speed output by the second recognition unit 134 as the state of the object. When the first recognition unit 132 does not output a position, but the second recognition unit 134 outputs a position (provisionally recognized as described above) and speed, the fusion unit 136 outputs the position and speed output by the second recognition unit 134 as the state of the object.
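The selection logic of the fusion unit 136 described in this paragraph can be summarized as: use the position from the first recognition unit when available, otherwise fall back to the provisional position from the second recognition unit, and take the speed from the second recognition unit. A minimal sketch follows, assuming simple dictionary-shaped outputs and hypothetical names; it is illustrative only.

```python
def fuse(first_out, second_out):
    """Combine the outputs of the first (normal-rate) and second (high-rate)
    recognition units into one object state, as described above.

    first_out:  {"position": ...} or None when the first unit has not output yet
    second_out: {"position": ..., "velocity": ...}
    """
    if first_out is not None:
        # position from the first unit, speed from the second unit
        return {"position": first_out["position"],
                "velocity": second_out["velocity"],
                "position_status": "confirmed"}
    # only the provisional position and speed from the second unit are available
    return {"position": second_out["position"],
            "velocity": second_out["velocity"],
            "position_status": "provisional"}

print(fuse(None, {"position": (9.9, 2.0), "velocity": (-5.0, 0.0)}))
print(fuse({"position": (9.85, 2.0)}, {"position": (9.9, 2.0), "velocity": (-5.0, 0.0)}))
```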
The first recognition unit 132 and the second recognition unit 134 each perform processing by executing a processing procedure that is at least partially common to both units. For example, the first recognition unit 132 and the second recognition unit 134 each input an image captured by the camera 10 (or an image obtained by preprocessing it) into a trained model such as a DNN (Deep Neural Network) to recognize and output the position, speed, etc. of an object, and the second recognition unit 134 compresses its input image to a lower resolution than the image input to the first recognition unit 132, thereby performing processing faster than the first recognition unit 132 as a result. FIG. 4 is a diagram showing an example of the relationship between the first recognition unit 132 and the second recognition unit 134. In this case, the input images input to the first recognition unit 132 are thinned out in time, resulting in a smaller number of frames than the input images input to the second recognition unit 134.
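A simple way to realize the relationship in FIG. 4 is to feed the second recognition unit a spatially downscaled copy of each frame while feeding the first recognition unit only every few frames at full resolution. The sketch below shows this preprocessing only (not the recognition model itself); the scaling factor, thinning ratio, and helper names are assumptions.

```python
def downscale(image, factor):
    """Naive nearest-neighbour downscaling: keep every `factor`-th pixel
    in both directions (stand-in for the compression to a lower resolution
    used by the high-rate path)."""
    return [row[::factor] for row in image[::factor]]

def thin_frames(frames, ratio):
    """Temporal thinning for the normal-rate path: keep every `ratio`-th frame."""
    return frames[::ratio]

# a dummy 8x8 grayscale image and a stream of 6 such frames (assumed sizes)
image = [[x + 8 * y for x in range(8)] for y in range(8)]
frames = [image] * 6

low_res = downscale(image, factor=2)         # 4x4 input for the second recognition unit
normal_rate_frames = thin_frames(frames, 3)  # every 3rd frame for the first recognition unit
print(len(low_res), len(low_res[0]), len(normal_rate_frames))  # 4 4 2
```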
FIG. 5 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene. In this diagram, the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132. The control timing is a virtual time that arrives at a predetermined period (which is the same as the processing period of the second recognition unit 134 in this embodiment). At each control timing, an image captured by the camera 10 is repeatedly input. For convenience, in this diagram, the control timing is assumed to start from 1.
At control timing 1, the second recognition unit 134 provisionally recognizes the presence and position of the object for the first time. The second recognition unit 134 also recognizes the speed of the object. At this time, the first recognition unit 132 or the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing.
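The next-prediction step is only outlined above; as a loose illustration rather than the disclosed algorithm, the following sketch extrapolates the next state from the most recent state and also computes the covariance matrix of the recent states and its eigenvalues, which is one possible reading of the quantities mentioned. numpy is assumed to be available, and all names and shapes are assumptions.

```python
import numpy as np

def next_prediction(recent_states, dt):
    """recent_states: array-like of shape (k, 4) with rows (x, y, vx, vy)
    observed at the last k control timings.

    Returns an extrapolated next state plus the covariance matrix of the
    recent states and its eigenvalues (a rough stand-in for the
    'covariance matrix / eigenvalues / interpolated value' mentioned above)."""
    states = np.asarray(recent_states, dtype=float)
    last = states[-1]
    predicted = last.copy()
    predicted[:2] += last[2:] * dt          # propagate the position by the velocity
    cov = np.cov(states, rowvar=False)      # spread of the recent states
    eigvals = np.linalg.eigvalsh(cov)
    return predicted, cov, eigvals

history = [(10.0, 2.0, -5.0, 0.0), (9.85, 2.0, -5.0, 0.0), (9.70, 2.0, -5.0, 0.0)]
pred, cov, eig = next_prediction(history, dt=0.03)
print(pred)  # [ 9.55  2.   -5.    0.  ]
```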
At control timing 1, the state output by the recognition unit 130 is defined as an object whose position has been provisionally recognized and whose speed has been definitively recognized. The degree of control over the object is changed depending on whether it has been provisionally recognized or definitively recognized. For example, for an object whose position has been provisionally recognized, the risk is calculated to be smaller than for an object whose position has been definitively recognized, and the degree of control is relaxed.
At control timing 2, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
At control timing 3, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image. The second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 2 and the result of processing the input image. At this point, it is defined that the presence of the object is confirmed, and the object's position is also confirmed to be recognized definitively.
At control timing 4, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image.
At control timing 5, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image.
At control timing 6, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image. The second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 5 and the result of processing the input image. After that, the same processing as at control timings 4 to 6 is repeatedly executed.
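The update pattern of FIG. 5 can be summarized as a 1:3 schedule between the two recognition units. The simplified sketch below reproduces which quantities are updated at each control timing; it is a schematic of the schedule only, with assumed names, and is not code from the disclosure.

```python
def run_timings(num_timings, ratio=3):
    """Reproduce the update pattern of FIG. 5 (simplified): the second
    recognition unit updates the speed at every control timing; it also
    tracks a provisional position until the first recognition unit has
    produced its first output, after which the first unit updates the
    position at every `ratio`-th timing."""
    log = []
    first_unit_has_output = False
    for t in range(1, num_timings + 1):
        updates = ["speed (second unit)"]
        if t % ratio == 0:
            updates.append("position (first unit)")
            first_unit_has_output = True
        elif not first_unit_has_output:
            updates.append("provisional position (second unit)")
        log.append((t, updates))
    return log

for timing, updates in run_timings(6):
    print(timing, updates)
```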
将来位置予測/リスク設定部138は、フュージョン部136が出力する物体の状態(位置、速度)に基づいて物体の将来位置を予測し、物体についてリスクを設定する。図6は、将来位置予測/リスク設定部138の処理について説明するための図である。将来位置予測/リスク設定部138は、車両Mの周辺の空間を上空から見た二次元平面で表した仮想的な平面である想定平面Sにおいて、車両Mが進入ないし接近すべきでない度合いを示す指標値であるリスクを設定する。リスクは、値が大きいほど車両Mが進入ないし接近すべきでないことを示し、値がゼロに近いほど車両Mが走行するのに好ましいことを示すものとする。但し、この関係は逆でもよい。移動体が車両ではなくドローンなどの飛翔体である場合、将来位置予測/リスク設定部138は、想定平面Sではなく三次元空間において同様の処理を行ってよい。
The future position prediction/risk setting unit 138 predicts the future position of the object based on the state (position, speed) of the object output by the fusion unit 136, and sets a risk for the object. FIG. 6 is a diagram for explaining the processing of the future position prediction/risk setting unit 138. The future position prediction/risk setting unit 138 sets risk, which is an index value indicating the degree to which the vehicle M should not enter or approach, on the assumed plane S, which is a virtual plane that represents the space around the vehicle M as a two-dimensional plane viewed from above. The higher the risk value, the more likely the vehicle M should not enter or approach, and the closer the value is to zero, the more favorable it is for the vehicle M to travel. However, this relationship may be reversed. If the moving object is a flying object such as a drone rather than a vehicle, the future position prediction/risk setting unit 138 may perform similar processing in three-dimensional space rather than on the assumed plane S.
将来位置予測/リスク設定部138は、想定平面Sにおけるリスクを、現在時刻t、Δt後(時刻t+Δt)、2Δt後(時刻t+2Δt)、…というように、現時点だけでなく、一定の時間間隔で規定される、予め予測した物体の将来位置についても設定する。
The future position prediction/risk setting unit 138 sets the risk on the assumed plane S not only for the current time but also for the object's future positions predicted at regular time intervals, such as the current time t, Δt later (time t + Δt), 2Δt later (time t + 2Δt), and so on.
将来位置予測/リスク設定部138は、車両、歩行者、自転車などの交通参加者(移動物標)について、想定平面S上で、進行方向および速度に基づく楕円ないし円を等高線とするリスクを設定し、走行不可能領域について一定値のリスクを設定する。図中、DMは車両Mの進行方向である。R(M1)は停止車両M1のリスクであり、R(P)は歩行者Pのリスクである。歩行者Pは道路を横断する方向に移動しているので、将来の各時点について現在時刻とは異なる位置にリスクが設定される。移動している車両や自転車などについても同様である。R(BD)は走行不可能領域BDのリスクである。図中、ハッチングの濃さがリスクの値を示しており、ハッチングが濃いほどリスクが大きいことを示している。将来位置予測/リスク設定部138は、車線の中央から離れる程、値が大きくなるようにリスクを設定してもよい。
The future position prediction/risk setting unit 138 sets, on the assumed plane S, risks for traffic participants (moving targets) such as vehicles, pedestrians, and bicycles, whose contour lines are ellipses or circles based on the direction of travel and speed, and sets a risk of a fixed value for impassable areas. In the figure, DM is the direction of travel of the vehicle M. R(M1) is the risk of the stopped vehicle M1, and R(P) is the risk of the pedestrian P. Since the pedestrian P is moving in a direction crossing the road, the risk for each future point in time is set at a position different from the position at the current time. The same applies to moving vehicles, bicycles, and the like. R(BD) is the risk of the impassable area BD. In the figure, the darkness of the hatching indicates the risk value, and the darker the hatching, the greater the risk. The future position prediction/risk setting unit 138 may also set a risk whose value increases with distance from the center of the lane.
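For illustration only (the embodiment does not specify an implementation), the following Python sketch shows one way such a risk field could be computed on the assumed plane S: each moving target contributes elliptical contours oriented along its direction of travel and stretched by its speed, impassable areas contribute a fixed value, and the field is also evaluated at future times t + kΔt using constant-velocity future positions. All function names, parameters, and the Gaussian-shaped contour profile are assumptions, not part of the disclosure.

```python
import numpy as np

def elliptical_risk(grid_x, grid_y, pos, heading, speed,
                    base_radius=1.0, speed_gain=0.5, peak=1.0):
    """Risk contribution of one moving target: elliptical contours whose long
    axis follows the heading and grows with speed (illustrative shape only)."""
    c, s = np.cos(heading), np.sin(heading)
    dx, dy = grid_x - pos[0], grid_y - pos[1]
    lon = c * dx + s * dy           # offset along the direction of travel
    lat = -s * dx + c * dy          # offset perpendicular to the direction of travel
    a = base_radius + speed_gain * speed   # semi-major axis stretched by speed
    b = base_radius                        # semi-minor axis
    return peak * np.exp(-((lon / a) ** 2 + (lat / b) ** 2))

def risk_field(targets, impassable_mask, extent=40.0, cell=0.5,
               horizon_steps=3, dt=0.5, impassable_risk=1.0):
    """Risk fields for times t, t+dt, ..., using constant-velocity future positions."""
    n = int(2 * extent / cell)
    xs = np.linspace(-extent, extent, n)
    grid_x, grid_y = np.meshgrid(xs, xs)
    fields = []
    for k in range(horizon_steps + 1):
        field = np.where(impassable_mask, impassable_risk, 0.0)
        for pos, vel in targets:
            future = (pos[0] + vel[0] * k * dt, pos[1] + vel[1] * k * dt)
            heading = np.arctan2(vel[1], vel[0])
            speed = float(np.hypot(*vel))
            field = np.maximum(field,
                               elliptical_risk(grid_x, grid_y, future, heading, speed))
        fields.append(field)
    return fields

# Example: one stopped vehicle and one pedestrian crossing the road.
targets = [((10.0, 0.0), (0.0, 0.0)), ((15.0, -5.0), (0.0, 1.2))]
mask = np.zeros((160, 160), dtype=bool)   # no impassable cells in this toy example
fields = risk_field(targets, mask)
```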
このような構成で処理を行うことで、認識部130および自動運転制御装置100は、物体の速度変化に速やかに追従することができる。図3~5で説明したように、第2認識部134は圧縮された入力画像を用いて処理を行うため、全体としての処理時間を短くすることができ、高解像度画像を用いて処理を行う第1認識部132よりも高速に処理を行うことができる。高解像度画像を用いた処理の利点は、主に車両Mから見て遠方の物体を精度よく検出できる点にあるが、車両Mの近くに存在する物体に関しては、それ程の性能差が出ないことが分かっている。一方で、物体の将来位置に基づいてリスクを設定するような処理を行う場合、物体の急な速度変化(例えば歩行者が急に走り出した、他車両が急停車した)といった事象に対しては速やかに追従することが望まれる。この点、実施例の認識部130では主に速度を認識する第2認識部134の方を高速処理にしているため、物体の急な速度変化を高速に検知することができる。従って、物体の速度変化に速やかに追従することができ、ひいては迅速に車両Mの挙動を制御することができる。
By performing processing with such a configuration, the recognition unit 130 and the automatic driving control device 100 can quickly follow changes in the speed of an object. As described with reference to FIGS. 3 to 5, the second recognition unit 134 performs processing using a compressed input image, so the overall processing time can be shortened, and it can process faster than the first recognition unit 132, which performs processing using a high-resolution image. The advantage of processing with a high-resolution image lies mainly in accurately detecting objects far from the vehicle M; for objects close to the vehicle M, it is known that there is not much performance difference. On the other hand, when setting risks based on the future positions of objects, it is desirable to follow sudden changes in an object's speed (for example, a pedestrian suddenly starting to run, or another vehicle stopping abruptly) as quickly as possible. In this respect, in the recognition unit 130 of the embodiment, the second recognition unit 134, which mainly recognizes speed, is the one given the faster processing, so sudden changes in an object's speed can be detected quickly. Accordingly, changes in the speed of an object can be followed promptly, which in turn allows the behavior of the vehicle M to be controlled promptly.
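As a rough sketch of the scheduling idea described here (not an implementation from the disclosure), the fast second recognition unit could run every control timing on a downscaled copy of the image, while the slower first recognition unit runs only every few timings on the full-resolution image. The recognizer functions below are stand-in stubs, and every name and parameter is an assumption.

```python
import numpy as np

def downscale(image, factor=4):
    """Naive box-average downscaling, standing in for the compression used by the fast path."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    return image[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def fast_recognizer(small_image):
    """Stub: cheap detector run every control timing on the compressed image."""
    return {"position": (10.0, 2.0), "speed": (0.0, 1.2)}

def slow_recognizer(full_image):
    """Stub: accurate detector run only every few control timings on the full image."""
    return {"position": (10.1, 2.05)}

def run_control_timing(t, image, slow_every=3):
    """Fast path every cycle; slow, high-resolution path every `slow_every` cycles."""
    out = {"fast": fast_recognizer(downscale(image))}
    if t % slow_every == 0:
        out["slow"] = slow_recognizer(image)
    return out

frame = np.zeros((480, 640))
results = [run_control_timing(t, frame) for t in range(1, 7)]
```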
第1認識部132と第2認識部134のそれぞれは、例えば、一または複数のプロセッサが、第1認識部132としての処理と第2認識部134としての処理を時分割で行うことで実現される。フュージョン部136や将来位置予測/リスク設定部138についても同様であってよい。これに代えて、第1認識部132と第2認識部134のそれぞれは、別体のプロセッサが処理を行うことで実現されてもよい。フュージョン部136や将来位置予測/リスク設定部138についても同様であってよい。
Each of the first recognition unit 132 and the second recognition unit 134 is realized, for example, by one or more processors performing processing as the first recognition unit 132 and processing as the second recognition unit 134 in a time-division manner. The same may be true for the fusion unit 136 and the future position prediction/risk setting unit 138. Alternatively, each of the first recognition unit 132 and the second recognition unit 134 may be realized by separate processors performing processing. The same may be true for the fusion unit 136 and the future position prediction/risk setting unit 138.
<第2の実施形態>
次に、第2の実施形態について説明する。第2の実施形態は、第1認識部132が、物体の位置を認識できた時点で位置の出力を開始し、物体の速度を認識した後は、位置および速度の両方を出力する点において、第1の実施形態と異なる。さらに、第2の実施形態は、第1認識部132により物体の位置が認識された後、第1認識部132が動作しない制御タイミングにおいて、第2認識部134により認識された速度に基づいて物体の位置が更新される点において、第1の実施形態と異なる。以下の説明では、第1の実施形態と同様の機能を有する構成については、同一の符号および名称を付するものとし、その具体的な説明については省略する。 Second Embodiment
Next, the second embodiment will be described. The second embodiment differs from the first embodiment in that the first recognition unit 132 starts outputting the position when it is able to recognize the position of the object, and outputs both the position and the velocity after it recognizes the velocity of the object. Furthermore, the second embodiment differs from the first embodiment in that after the position of the object is recognized by the first recognition unit 132, the position of the object is updated based on the velocity recognized by the second recognition unit 134 at a control timing when the first recognition unit 132 does not operate. In the following description, the same reference numerals and names are used for configurations having the same functions as those in the first embodiment, and a detailed description thereof will be omitted.
図7は、第2の実施形態に係るある場面における第1認識部132と第2認識部134の動作の一例を示す図である。本図において第2認識部134は、第1認識部132の3倍の周波数で処理を行うものとする。
FIG. 7 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the second embodiment. In this diagram, the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
制御タイミング1において、第2認識部134が初めて物体の存在および位置を暫定的に認識する。第2認識部134は、合わせて物体の速度も認識する。すなわち、高速で物体認識を行う第2認識部134による物体認識結果(「高速物体認識結果」)に基づいて、位置および速度が認識される。このとき、第2認識部134が(或いは他の機能部が)、次回の制御タイミング2における物体の位置および速度を、例えば当該回の制御タイミング以前に算出された状態を並べて生成した共分散行列、その固有値、および将来に展開するための補間値を計算することで予測しておく(次回予測)。この次回予測の処理は制御タイミングごとに繰り返し実行される。制御タイミング1において、認識部130の出力する状態の定義は、物体の位置については暫定認識、速度については確定的に認識されたと定義される。
At control timing 1, the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time. The second recognition unit 134 also recognizes the speed of the object. That is, the position and speed are recognized based on the object recognition result ("high-speed object recognition result") by the second recognition unit 134, which performs object recognition at high speed. At this time, the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before the control timing in question, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing. At control timing 1, the state output by the recognition unit 130 is defined as tentatively recognized for the object position and definitively recognized for the speed.
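The disclosure leaves the exact form of this "next prediction" open; as a heavily simplified sketch, one might keep a short history of states, extrapolate the latest state with a constant-velocity model over one control period, and use the covariance of the stacked past states (and its eigenvalues) only as a rough measure of how spread out the history is. The names, the constant-velocity assumption, and the role of the covariance below are illustrative assumptions.

```python
import numpy as np

def next_prediction(history, dt):
    """Predict the state at the next control timing from a short history of
    states (x, y, vx, vy). Constant-velocity extrapolation; the covariance of
    the stacked past states is used only as a crude uncertainty measure."""
    states = np.asarray(history)              # shape (k, 4), oldest first
    x, y, vx, vy = states[-1]
    predicted = np.array([x + vx * dt, y + vy * dt, vx, vy])
    if len(states) >= 2:
        cov = np.cov(states.T)                # 4x4 covariance of the past states
        eigvals = np.linalg.eigvalsh(cov)     # spread of the history; larger = less certain
    else:
        cov, eigvals = np.zeros((4, 4)), np.zeros(4)
    return predicted, cov, eigvals

# Two past states at successive control timings, 0.1 s apart.
history = [(10.0, 2.0, 0.0, 1.2), (10.0, 2.1, 0.0, 1.1)]
pred, cov, eig = next_prediction(history, dt=0.1)
```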
制御タイミング2において、第2認識部134が制御タイミング1における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。
At control timing 2, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
制御タイミング3は、第1認識部132が初めて出力を行うタイミングである。この制御タイミング3において、第1認識部132が制御タイミング2における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。すなわち、通常の速度で物体認識を行う第1認識部132による物体認識結果(「通常物体認識結果」)に基づいて、物体の位置が更新される。また、第2認識部134が制御タイミング2における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。すなわち、第2認識部134による高速物体認識結果に基づいて、物体の速度が更新される。このとき、物体の存在が確定し、物体の位置に関しても確定的に認識されたと定義される。
Control timing 3 is the timing when the first recognition unit 132 outputs for the first time. At this control timing 3, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image. That is, the object's position is updated based on the object recognition result by the first recognition unit 132, which performs object recognition at normal speed (the "normal object recognition result"). In addition, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 2 and the result of processing the input image. That is, the object's speed is updated based on the high-speed object recognition result by the second recognition unit 134. At this time, the existence of the object is considered confirmed, and the object's position is defined as definitively recognized.
制御タイミング4において、第2認識部134が制御タイミング3における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間に基づいて物体の位置を更新する。第2認識部134が、制御タイミング3において第1認識部132により認識された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング3から制御タイミング4までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。線形補間には、例えば、当該回の制御タイミング以前に算出された状態を並べて生成した共分散行列(位置や速度の履歴情報)、その固有値等が用いられる。
At control timing 4, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position based on linear interpolation. The second recognition unit 134 estimates the current object's position by taking the object's position recognized by the first recognition unit 132 at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 4) to this reference position, and updates the object's position to the estimated position. For example, the linear interpolation uses a covariance matrix (position and speed history information) generated by arranging the states calculated before the control timing in question, and its eigenvalues, etc.
制御タイミング5において、第2認識部134が制御タイミング4における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間により物体の位置を更新する。第2認識部134が、制御タイミング3において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング3から制御タイミング5までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。あるいは、第2認識部134が、制御タイミング4において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング4から制御タイミング5までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。
At control timing 5, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 5) to this reference position, and updates the object's position to the estimated position. Alternatively, the second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 4 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 4 to control timing 5) to this reference position, and updates the object's position to the estimated position.
すなわち、第1認識部132が第1タイミングで第1位置の出力を行った後、第1認識部132が出力を行わず且つ第2認識部134が出力を行う第2タイミングにおいて、第2認識部134は、過去に認識された物体の位置および速度の履歴情報に基づく線形補間を行うことにより、第1位置を更新する。
In other words, after the first recognition unit 132 outputs the first position at a first timing, at a second timing when the first recognition unit 132 does not output and the second recognition unit 134 outputs, the second recognition unit 134 updates the first position by performing linear interpolation based on the historical information of the positions and velocities of objects that have been recognized in the past.
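A minimal sketch of this interpolation step, assuming a piecewise-constant velocity between the fast recognition unit's outputs; the function name, timestamps, and data layout are illustrative only.

```python
def update_position_between_slow_outputs(reference_position, reference_time,
                                         velocity_history, now):
    """Between outputs of the first recognition unit, advance the last
    definitively recognized position using the velocities recognized since then.
    `velocity_history` is a list of (timestamp, (vx, vy)) from the second
    recognition unit; piecewise-constant motion is assumed for illustration."""
    x, y = reference_position
    t_prev = reference_time
    for t, (vx, vy) in velocity_history:
        if t <= reference_time:
            continue                     # older than the reference position
        step = min(t, now) - t_prev
        if step <= 0:
            break
        x += vx * step
        y += vy * step
        t_prev = min(t, now)
    return (x, y)

# Reference position from the first recognition unit at control timing 3 (t = 0.3 s),
# velocities from the second recognition unit at control timings 4 and 5.
pos = update_position_between_slow_outputs(
    (10.0, 2.0), 0.3, [(0.4, (0.0, 1.2)), (0.5, (0.0, 1.3))], now=0.5)
```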
制御タイミング6は、第1認識部132が2回目以降の出力を行うタイミングである。この制御タイミング6において、第1認識部132が制御タイミング5における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。また、この制御タイミング6においては、第1認識部132により認識された物体の位置の情報が蓄積されており、当該位置の情報に基づく速度の推定が可能となっている。このため、第1認識部132が以前に認識した物体の位置の認識結果(例えば、制御タイミング3における認識結果)と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。以降、制御タイミング4~6と同様の処理が繰り返し実行される。
Control timing 6 is the timing at which the first recognition unit 132 performs the second or subsequent output. At this control timing 6, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image. Also, at this control timing 6, information on the object's position recognized by the first recognition unit 132 is accumulated, making it possible to estimate the speed based on the position information. Therefore, the first recognition unit 132 updates the object's speed based on the recognition result of the object's position previously recognized (for example, the recognition result at control timing 3) and the result of processing the input image. Thereafter, the same processing as at control timings 4 to 6 is repeatedly executed.
すなわち、第1認識部132は、物体の位置および速度を第1周期で繰り返し出力する機能を有する。第2認識部134は、物体の位置および速度を第2周期で繰り返し出力する機能を有する。フュージョン部136は、第1認識部132が初めて出力を行うタイミングでは、第1認識部132が出力した位置と第2認識部134が出力した速度を物体の状態として出力する。また、フュージョン部136は、第1認識部132が2回目以降の出力を行うタイミングでは、第1認識部132が出力した位置および速度を物体の状態として出力する。
In other words, the first recognition unit 132 has a function of repeatedly outputting the position and velocity of an object in a first period. The second recognition unit 134 has a function of repeatedly outputting the position and velocity of an object in a second period. When the first recognition unit 132 performs an output for the first time, the fusion unit 136 outputs the position output by the first recognition unit 132 and the velocity output by the second recognition unit 134 as the state of the object. Furthermore, when the first recognition unit 132 performs an output for the second time or later, the fusion unit 136 outputs the position and velocity output by the first recognition unit 132 as the state of the object.
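The output-selection rule just described can be summarized in a short sketch; the dictionary keys and the way the slow unit's output count is tracked are assumptions for illustration.

```python
def fuse_second_embodiment(slow_out, fast_out, slow_output_count):
    """On the first output of the first (slow) recognition unit, use its position
    together with the velocity from the second (fast) recognition unit; on later
    outputs, use the slow unit's position and velocity; while the slow unit has
    not output yet, fall back entirely on the fast unit."""
    if slow_out is None:                     # slow unit has not output at this timing
        return {"position": fast_out["position"], "velocity": fast_out["velocity"]}
    if slow_output_count == 1:               # first output of the slow unit
        return {"position": slow_out["position"], "velocity": fast_out["velocity"]}
    return {"position": slow_out["position"], "velocity": slow_out["velocity"]}

state = fuse_second_embodiment(
    {"position": (10.1, 2.0), "velocity": (0.0, 1.1)},
    {"position": (10.0, 2.0), "velocity": (0.0, 1.2)},
    slow_output_count=1)
# -> position from the slow unit, velocity from the fast unit
```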
以上説明した第2の実施形態によれば、物体の速度変化に速やかに追従することができ、ひいては迅速に移動体の挙動を制御することができる。また、第1認識部132により物体の速度が認識された後は、第1認識部132により認識された位置および速度を物体の状態として出力するようにすることで、物体の位置および速度の推定精度を高めることができる。さらに、第1認識部132により物体の位置が認識された後、第1認識部132が動作しない制御タイミングにおいては、第2認識部134が速度に基づいて物体の位置を更新することで、物体の位置の推定精度を高めることができ、位置の誤差がフュージョン部に与える影響を効果的に低減してフュージョンのパフォーマンスを向上させることができる。
According to the second embodiment described above, it is possible to quickly track changes in the speed of an object, and therefore to quickly control the behavior of the moving object. Furthermore, after the speed of an object is recognized by the first recognition unit 132, the position and speed recognized by the first recognition unit 132 are output as the state of the object, thereby improving the accuracy of estimating the position and speed of the object. Furthermore, after the position of an object is recognized by the first recognition unit 132, at a control timing when the first recognition unit 132 is not operating, the second recognition unit 134 updates the object's position based on the speed, thereby improving the accuracy of estimating the object's position, and effectively reducing the impact of position errors on the fusion unit to improve fusion performance.
<第3の実施形態>
次に、第3の実施形態について説明する。第3の実施形態は、第1認識部132が、速度を認識するまでは何ら出力を行わず、速度を認識した後は速度および位置の両方を出力する点において、第1および2の実施形態と異なる。以下の説明では、第1および2の実施形態と同様の機能を有する構成については、同一の符号および名称を付するものとし、その具体的な説明については省略する。 Third Embodiment
Next, a third embodiment will be described. The third embodiment differs from the first and second embodiments in that the first recognition unit 132 does not output anything until it recognizes the speed, and outputs both the speed and the position after it recognizes the speed. In the following description, components having the same functions as those in the first and second embodiments are given the same reference numerals and names, and a detailed description thereof will be omitted.
図8は、第3の実施形態に係るある場面における第1認識部132と第2認識部134の動作の一例を示す図である。本図において第2認識部134は、第1認識部132の3倍の周波数で処理を行うものとする。
FIG. 8 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the third embodiment. In this diagram, the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
制御タイミング1において、第2認識部134が初めて物体の存在および位置を暫定的に認識する。第2認識部134は、合わせて物体の速度も認識する。このとき、第2認識部134が(或いは他の機能部が)、次回の制御タイミング2における物体の位置および速度を、例えば当該回の制御タイミング以前に算出された状態を並べて生成した共分散行列、その固有値、および将来に展開するための補間値を計算することで予測しておく(次回予測)。この次回予測の処理は制御タイミングごとに繰り返し実行される。制御タイミング1において、認識部130の出力する状態の定義は、物体の位置については暫定認識、速度については確定的に認識されたと定義される。
At control timing 1, the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time. The second recognition unit 134 also recognizes the speed of the object. At this time, the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing. At control timing 1, the state output by the recognition unit 130 is defined as tentatively recognized for the object's position and definitively recognized for its speed.
制御タイミング2において、第2認識部134が制御タイミング1における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。
At control timing 2, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
制御タイミング3において、第2認識部134が制御タイミング2における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。また、制御タイミング3は、第1認識部132が初めて動作を行うタイミングである。この制御タイミング3において、第1認識部132が制御タイミング2における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を認識するが、認識した位置を出力せずに、メモリ(不図示)に記憶させる。
At control timing 3, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 2 and the result of processing the input image. Control timing 3 is also the timing at which the first recognition unit 132 operates for the first time. At this control timing 3, the first recognition unit 132 recognizes the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image, but, instead of outputting the recognized position, stores it in memory (not shown).
制御タイミング4において、第2認識部134が制御タイミング3における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。
At control timing 4, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 3 and the result of processing the input image.
制御タイミング5において、第2認識部134が制御タイミング4における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。
At control timing 5, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 4 and the result of processing the input image.
制御タイミング6は、第1認識部132が初めて出力を行うタイミングである。この制御タイミング6において、第1認識部132が制御タイミング5における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。このとき、物体の存在が確定し、物体の位置に関しても確定的に認識されたと定義される。また、この制御タイミング6においては、第1認識部132により認識された物体の位置の情報が蓄積されており、当該位置の情報に基づく速度の推定が可能となっている。このため、第1認識部132が以前に認識した物体の位置の認識結果(例えば、制御タイミング3における認識結果)と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。
Control timing 6 is the timing when the first recognition unit 132 outputs for the first time. At this control timing 6, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image. At this time, the existence of the object is considered confirmed, and the object's position is defined as definitively recognized. Furthermore, at this control timing 6, information on the object's position recognized by the first recognition unit 132 has been accumulated, making it possible to estimate the speed based on that position information. Therefore, the object's speed is updated based on the recognition result of the object's position previously recognized by the first recognition unit 132 (for example, the recognition result at control timing 3) and the result of processing the input image.
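A minimal sketch of this velocity estimate, assuming it reduces to a finite difference between the two most recent positions recognized by the first recognition unit; the disclosure does not fix the exact formula, so names and timestamps are illustrative.

```python
def velocity_from_position_history(position_history):
    """Once the first recognition unit has accumulated at least two position
    observations, estimate the velocity as a finite difference between the two
    most recent ones. `position_history` is a list of (timestamp, (x, y))."""
    if len(position_history) < 2:
        return None                           # not enough history yet
    (t0, (x0, y0)), (t1, (x1, y1)) = position_history[-2], position_history[-1]
    dt = t1 - t0
    return ((x1 - x0) / dt, (y1 - y0) / dt)

# Positions stored at control timings 3 and 6 (e.g. 0.3 s apart).
vel = velocity_from_position_history([(0.3, (10.0, 2.0)), (0.6, (10.0, 2.36))])
# -> approximately (0.0, 1.2)
```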
制御タイミング7において、第2認識部134が制御タイミング6における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間により物体の位置を更新する。第2認識部134が、制御タイミング6において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング6から制御タイミング7までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。
At control timing 7, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 6 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by setting the object's position updated at control timing 6 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 6 to control timing 7) to this reference position, and updates the object's position to the estimated position.
制御タイミング8において、第2認識部134が制御タイミング7における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間により物体の位置を更新する。第2認識部134が、制御タイミング6において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング6から制御タイミング8までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。あるいは、第2認識部134が、制御タイミング7において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング7から制御タイミング8までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。
At control timing 8, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 7 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 6 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 6 to control timing 8) to this reference position, and updates the object's position to the estimated position. Alternatively, the second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 7 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 7 to control timing 8) to this reference position, and updates the object's position to the estimated position.
制御タイミング9は、第1認識部132が二回目の出力を行うタイミングである。この制御タイミング9において、第1認識部132が制御タイミング8における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。また、第1認識部132が以前に認識した物体の位置の認識結果(例えば、制御タイミング3および6における認識結果)と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。これにより、フュージョン部136は、第1認識部132が出力を行うタイミングでは、第1認識部132が出力した位置および速度を物体の状態として出力する。以降、制御タイミング7~9と同様の処理が繰り返し実行される。
Control timing 9 is the timing at which the first recognition unit 132 performs the second output. At this control timing 9, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 8 and the result of processing the input image. The first recognition unit 132 also updates the object's speed based on the recognition result of the object's position previously recognized by the first recognition unit 132 (for example, the recognition result at control timings 3 and 6) and the result of processing the input image. As a result, at the timing at which the first recognition unit 132 performs an output, the fusion unit 136 outputs the position and speed output by the first recognition unit 132 as the object's state. Thereafter, the same processes as those at control timings 7 to 9 are repeatedly executed.
以上説明した第3の実施形態によれば、物体の速度変化に速やかに追従することができ、ひいては迅速に移動体の挙動を制御することができる。さらに、第1認識部132により認識された物体の位置の情報が蓄積されて速度の推定が可能となった後、第1認識部132が動作する制御タイミングにおいては、第1認識部132により認識される速度に基づいて物体の速度が更新される。これにより、物体の位置および速度の推定精度を高めることができる。また、第1認識部132のみを備える既存システムに、第2認識部134を組み込む場合等における、設計の多様性を保つことができる。
According to the third embodiment described above, it is possible to quickly track changes in the speed of an object, and therefore to quickly control the behavior of the moving body. Furthermore, after information on the object's position recognized by the first recognition unit 132 is accumulated and it becomes possible to estimate the speed, at the control timing when the first recognition unit 132 operates, the speed of the object is updated based on the speed recognized by the first recognition unit 132. This makes it possible to improve the accuracy of estimating the object's position and speed. Furthermore, it is possible to maintain design diversity in cases such as when the second recognition unit 134 is incorporated into an existing system that only has the first recognition unit 132.
<第4の実施形態>
次に、第4の実施形態について説明する。第4の実施形態は、第1認識部132が初めて動作する制御タイミングにおいては、当該回の制御タイミングの前回の前回制御タイミングでの第2認識部134による認識結果と、当該回の制御タイミングでの第1認識部132による認識結果とに基づいて、物体の速度が更新される点において、第1から3の実施形態と異なる。以下の説明では、第1から3の実施形態と同様の機能を有する構成については、同一の符号および名称を付するものとし、その具体的な説明については省略する。 Fourth Embodiment
Next, a fourth embodiment will be described. The fourth embodiment differs from the first to third embodiments in that, at the control timing when the first recognition unit 132 operates for the first time, the speed of the object is updated based on the recognition result by the second recognition unit 134 at the previous control timing immediately preceding the control timing in question and the recognition result by the first recognition unit 132 at the control timing in question. In the following description, components having the same functions as those in the first to third embodiments are given the same reference numerals and names, and a detailed description thereof will be omitted.
図9は、第4の実施形態に係るある場面における第1認識部132と第2認識部134の動作の一例を示す図である。本図において第2認識部134は、第1認識部132の3倍の周波数で処理を行うものとする。
FIG. 9 is a diagram showing an example of the operation of the first recognition unit 132 and the second recognition unit 134 in a certain scene according to the fourth embodiment. In this figure, the second recognition unit 134 performs processing at a frequency three times that of the first recognition unit 132.
制御タイミング1において、第2認識部134が初めて物体の存在および位置を暫定的に認識する。第2認識部134は、合わせて物体の速度も認識する。このとき、第2認識部134が(或いは他の機能部が)、次回の制御タイミング2における物体の位置および速度を、例えば当該回の制御タイミング以前に算出された状態を並べて生成した共分散行列、その固有値、および将来に展開するための補間値を計算することで予測しておく(次回予測)。この次回予測の処理は制御タイミングごとに繰り返し実行される。制御タイミング1において、認識部130の出力する状態の定義は、物体の位置については暫定認識、速度については確定的に認識されたと定義される。
At control timing 1, the second recognition unit 134 provisionally recognizes the presence and position of an object for the first time. The second recognition unit 134 also recognizes the speed of the object. At this time, the second recognition unit 134 (or another functional unit) predicts the position and speed of the object at the next control timing 2, for example, by calculating a covariance matrix generated by arranging the states calculated before that control timing, its eigenvalues, and an interpolated value for future expansion (next prediction). This next prediction process is repeatedly executed at each control timing. At control timing 1, the state output by the recognition unit 130 is defined as tentatively recognized for the object's position and definitively recognized for its speed.
制御タイミング2において、第2認識部134が制御タイミング1における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置と速度を更新する。
At control timing 2, the second recognition unit 134 updates the object's position and speed based on the result of the next prediction at control timing 1 and the result of processing the input image.
制御タイミング3は、第1認識部132が初めて出力を行うタイミングである。この制御タイミング3において、第1認識部132が制御タイミング2における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。このとき、物体の存在が確定し、物体の位置に関しても確定的に認識されたと定義される。また、第1認識部132が制御タイミング2における第2認識部134による認識結果(前回の高速物体認識結果の位置)と、制御タイミング3における第1認識部132による認識結果(今回の通常物体認識結果の位置)とに基づいて、物体の速度を更新する。これにより、フュージョン部136は、第1認識部132が初めて出力を行うタイミングでは、第1認識部132が出力した位置と、第1認識部132が出力した位置およびタイミングの前の前回タイミングで第2認識部134が出力した位置に基づいて第1認識部132が出力した速度とを物体の状態として出力する。
Control timing 3 is the timing when the first recognition unit 132 outputs for the first time. At this control timing 3, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 2 and the result of processing the input image. At this time, the existence of the object is considered confirmed, and the object's position is defined as definitively recognized. The first recognition unit 132 also updates the object's speed based on the recognition result by the second recognition unit 134 at control timing 2 (the position of the previous high-speed object recognition result) and the recognition result by the first recognition unit 132 at control timing 3 (the position of the current normal object recognition result). As a result, at the timing when the first recognition unit 132 outputs for the first time, the fusion unit 136 outputs, as the state of the object, the position output by the first recognition unit 132 and the speed output by the first recognition unit 132 based on that position and on the position output by the second recognition unit 134 at the immediately preceding timing.
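A minimal sketch of this first-output velocity, assuming a simple displacement-over-time computation between the fast unit's previous position and the slow unit's current position; the names and timestamps are illustrative assumptions.

```python
def first_output_velocity(prev_fast_position, prev_fast_time,
                          current_slow_position, current_slow_time):
    """Fourth-embodiment first-output case: take the velocity as the displacement
    from the position recognized by the fast unit at the previous control timing
    to the position recognized by the slow unit at the current control timing,
    divided by the elapsed time."""
    dt = current_slow_time - prev_fast_time
    return ((current_slow_position[0] - prev_fast_position[0]) / dt,
            (current_slow_position[1] - prev_fast_position[1]) / dt)

# Fast-unit position at control timing 2, slow-unit position at control timing 3.
vel = first_output_velocity((10.0, 2.12), 0.2, (10.0, 2.24), 0.3)
# -> approximately (0.0, 1.2)
```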
制御タイミング4において、第2認識部134が制御タイミング3における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間により物体の位置を更新する。第2認識部134が、制御タイミング3において第1認識部132により認識された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング3から制御タイミング4までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。
At control timing 4, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 3 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the current object's position by setting the object's position recognized by the first recognition unit 132 at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 4) to this reference position, and updates the object's position to the estimated position.
制御タイミング5において、第2認識部134が制御タイミング4における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。さらに、第2認識部134が速度に基づいて物体の位置を更新する。例えば、第2認識部134は、線形補間により物体の位置を更新する。第2認識部134が、制御タイミング3において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング3から制御タイミング5までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。あるいは、第2認識部134が、制御タイミング4において更新された物体の位置を基準位置として、この基準位置に、線形補間を用いて算出された物体の移動量(制御タイミング4から制御タイミング5までの移動量)を加算することで現在の物体の位置を推定し、推定した位置で物体の位置を更新する。
At control timing 5, the second recognition unit 134 updates the object's speed based on the result of the next prediction at control timing 4 and the result of processing the input image. Furthermore, the second recognition unit 134 updates the object's position based on the speed. For example, the second recognition unit 134 updates the object's position by linear interpolation. The second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 3 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 3 to control timing 5) to this reference position, and updates the object's position to the estimated position. Alternatively, the second recognition unit 134 estimates the object's current position by taking the object's position updated at control timing 4 as a reference position and adding the object's movement amount calculated using linear interpolation (the movement amount from control timing 4 to control timing 5) to this reference position, and updates the object's position to the estimated position.
制御タイミング6は、第1認識部132が2回目以降の出力を行うタイミングである。この制御タイミング6において、第1認識部132が制御タイミング5における次回予測の結果と、入力された画像を処理した結果とに基づいて、物体の位置を更新する。また、この制御タイミング6においては、第1認識部132により認識された物体の位置の情報が蓄積されており、当該位置の情報に基づく速度の推定が可能となっている。このため、第1認識部132が以前に認識した物体の位置の認識結果(例えば、制御タイミング3における認識結果)と、入力された画像を処理した結果とに基づいて、物体の速度を更新する。以降、制御タイミング4~6と同様の処理が繰り返し実行される。
Control timing 6 is the timing at which the first recognition unit 132 performs the second or subsequent output. At this control timing 6, the first recognition unit 132 updates the object's position based on the result of the next prediction at control timing 5 and the result of processing the input image. Also, at this control timing 6, information on the object's position recognized by the first recognition unit 132 is accumulated, making it possible to estimate the speed based on the position information. Therefore, the first recognition unit 132 updates the object's speed based on the recognition result of the object's position previously recognized (for example, the recognition result at control timing 3) and the result of processing the input image. Thereafter, the same processing as at control timings 4 to 6 is repeatedly executed.
以上説明した第4の実施形態によれば、物体の速度変化に速やかに追従することができ、ひいては迅速に移動体の挙動を制御することができる。また、第1認識部132が初めて動作する制御タイミングにおいては、前回タイミングでの第2認識部134による認識結果と、当該回の制御タイミングでの第1認識部132による認識結果とに基づいて、物体の速度が更新される。これにより、第1認識部132のみを備える既存システムに、第2認識部134を組み込む場合等における、設計の多様性を保つことができる。
According to the fourth embodiment described above, it is possible to quickly track changes in the speed of an object, and therefore to quickly control the behavior of the moving object. Furthermore, at the control timing when the first recognition unit 132 operates for the first time, the speed of the object is updated based on the recognition result by the second recognition unit 134 at the previous timing and the recognition result by the first recognition unit 132 at the current control timing. This makes it possible to maintain design diversity when, for example, incorporating the second recognition unit 134 into an existing system equipped only with the first recognition unit 132.
なお、上記の説明において、制御装置は、車両M(つまりは移動体)に搭載されるものとしたが、これに限らず、移動体から離れた場所に設置され、通信によってカメラ10やレーダ装置12などの出力データを取得すると共に、駆動指示信号を移動体に送信するもの、つまり移動体を遠隔制御するものであってもよい。
In the above description, the control device is described as being mounted on the vehicle M (i.e., the moving body), but this is not limited thereto. The control device may be installed at a location away from the moving body, and may acquire output data from the camera 10, radar device 12, etc. through communication and transmit drive instruction signals to the moving body, i.e., may remotely control the moving body.
また、上記において説明した実施形態は一例であり、本発明はこれらの実施形態の構成に限定されるものではない。各実施形態に含まれる機能または構成を適宜組み合わせることも可能である。例えば、第2から第4の実施形態において説明したような、第1認識部132により物体の位置が認識された後、第1認識部132が動作しない制御タイミングにおいて、第2認識部134により認識された速度に基づいて物体の位置が更新される構成を、第1の実施形態に組み入れることも可能である。
Furthermore, the embodiments described above are merely examples, and the present invention is not limited to the configurations of these embodiments. It is also possible to combine the functions or configurations included in each embodiment as appropriate. For example, as described in the second to fourth embodiments, it is also possible to incorporate into the first embodiment a configuration in which, after the position of an object is recognized by the first recognition unit 132, the position of the object is updated based on the speed recognized by the second recognition unit 134 at a control timing when the first recognition unit 132 is not operating.
上記説明した実施形態は、以下のように表現することができる。
移動体の周辺状況を検知するための検知デバイスの出力に基づいて、前記移動体の周辺に存在する物体の位置および速度を認識する認識装置であって、
コンピュータによって読み込み可能な命令(computer-readable instructions)を格納する一以上の記憶媒体(storage medium)と、
前記一以上の記憶媒体に接続されたプロセッサと、を備え、
前記プロセッサは、前記コンピュータによって読み込み可能な命令を実行することにより(the processor executing the computer-readable instructions to:)、
前記物体の位置を認識するための処理を行った結果である前記物体の位置を第1周期で繰り返し出力し、
前記物体の速度を認識するための処理を行った結果である前記物体の速度を前記第1周期よりも短い第2周期で繰り返し出力する、
制御装置。 The above-described embodiment can be expressed as follows.
A recognition device that recognizes a position and a speed of an object present around a moving body based on an output of a detection device for detecting a surrounding situation of the moving body,
one or more storage media storing computer-readable instructions;
a processor coupled to the one or more storage media;
The processor executes the computer-readable instructions to:
repeatedly outputting the position of the object, which is a result of performing processing for recognizing the position of the object, in a first period;
repeatedly outputting the velocity of the object, which is a result of performing a process for recognizing the velocity of the object, at a second period shorter than the first period;
Control device.
以上、本発明を実施するための形態について実施形態を用いて説明したが、本発明はこうした実施形態に何等限定されるものではなく、本発明の要旨を逸脱しない範囲内において種々の変形及び置換を加えることができる。
Although the above describes the form for carrying out the present invention using an embodiment, the present invention is in no way limited to such an embodiment, and various modifications and substitutions can be made without departing from the spirit of the present invention.
10 カメラ
12 レーダ装置
14 LIDAR
16 物体認識装置
100 自動運転制御装置
120 第1制御部
130 認識部
132 第1認識部
134 第2認識部
136 フュージョン部
138 将来位置予測/リスク設定部
140 行動計画生成部
160 第2制御部
10 Camera
12 Radar device
14 LIDAR
16 Object recognition device
100 Automatic driving control device
120 First control unit
130 Recognition unit
132 First recognition unit
134 Second recognition unit
136 Fusion unit
138 Future position prediction/risk setting unit
140 Action plan generation unit
160 Second control unit
Claims (14)
- 移動体の周辺状況を検知するための検知デバイスの出力に基づいて、前記移動体の周辺に存在する物体の位置および速度を認識する認識装置であって、
前記物体の位置を認識するための処理を行った結果である前記物体の位置を第1周期で繰り返し出力する第1認識部と、
前記物体の速度を認識するための処理を行った結果である前記物体の速度を前記第1周期よりも短い第2周期で繰り返し出力する第2認識部と、
を備える認識装置。 A recognition device that recognizes a position and a speed of an object present around a moving body based on an output of a detection device for detecting a surrounding situation of the moving body,
a first recognition unit that repeatedly outputs the position of the object, which is a result of performing a process for recognizing the position of the object, in a first period;
a second recognition unit that repeatedly outputs the velocity of the object, which is a result of performing a process for recognizing the velocity of the object, at a second period that is shorter than the first period;
A recognition device comprising: - 前記第2認識部は、前記第1認識部により位置が出力されていない物体の位置を暫定的に認識する機能を有し、
前記第1認識部から位置が、前記第2認識部から速度が、それぞれ出力されている場合、前記第1認識部が出力した位置と前記第2認識部が出力した速度を当該物体の状態として出力し、
前記第1認識部から位置が出力されておらず、前記第2認識部から位置および速度が出力されている物体について、前記第2認識部が出力した位置および速度を当該物体の状態として出力するフュージョン部を更に備える、
請求項1記載の認識装置。 the second recognition unit has a function of provisionally recognizing a position of an object whose position has not been output by the first recognition unit,
When a position is output from the first recognition unit and a velocity is output from the second recognition unit, the position output from the first recognition unit and the velocity output from the second recognition unit are output as a state of the object,
a fusion unit that outputs the position and velocity output by the second recognition unit as a state of an object for which the position is not output from the first recognition unit and the position and velocity are output from the second recognition unit,
The recognition device according to claim 1. - 前記第2認識部は、前記第1認識部により位置が出力されていない物体の存在を認識する機能を有する、
請求項2記載の認識装置。 The second recognition unit has a function of recognizing the presence of an object whose position has not been output by the first recognition unit.
The recognition device according to claim 2. - 前記第1認識部と前記第2認識部は、少なくとも一部が共通する処理手順を実行することで、それぞれ処理を行うものである、
請求項1記載の認識装置。 The first recognition unit and the second recognition unit each perform a process by executing a process procedure that is at least partially common to each other.
The recognition device according to claim 1. - 前記第1認識部と前記第2認識部のそれぞれは、一または複数のプロセッサが、前記第1認識部としての処理と前記第2認識部としての処理を時分割で行うことで実現される、
請求項1記載の認識装置。 Each of the first recognition unit and the second recognition unit is realized by one or a plurality of processors performing a process as the first recognition unit and a process as the second recognition unit in a time-division manner.
The recognition device according to claim 1. - 前記第1認識部と前記第2認識部のそれぞれは、別体のプロセッサが処理を行うことで実現される、
請求項1記載の認識装置。 The first recognition unit and the second recognition unit are each realized by a separate processor performing processing.
The recognition device according to claim 1. - 前記第1認識部は、前記物体の位置および速度を前記第1周期で繰り返し出力する機能を有し、
前記第2認識部は、前記物体の位置および速度を前記第2周期で繰り返し出力する機能を有し、
前記第1認識部が初めて出力を行うタイミングでは、前記第1認識部が出力した位置と前記第2認識部が出力した速度を前記物体の状態として出力するフュージョン部を更に備える、
請求項1記載の認識装置。 the first recognition unit has a function of repeatedly outputting a position and a velocity of the object in the first period;
the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period;
and a fusion unit that outputs the position output by the first recognition unit and the velocity output by the second recognition unit as a state of the object at a timing when the first recognition unit performs an output for the first time.
The recognition device according to claim 1. - 前記フュージョン部は、前記第1認識部が2回目以降の出力を行うタイミングでは、前記第1認識部が出力した位置および速度を前記物体の状態として出力する、
請求項7記載の認識装置。 The fusion unit outputs the position and velocity output by the first recognition unit as the state of the object at a timing when the first recognition unit performs a second or subsequent output.
The recognition device according to claim 7. - 前記第1認識部は、前記物体の位置および速度を前記第1周期で繰り返し出力する機能を有し、
前記第1認識部が出力を行うタイミングでは、前記第1認識部が出力した位置および速度を前記物体の状態として出力するフュージョン部を更に備える、
請求項1記載の認識装置。 the first recognition unit has a function of repeatedly outputting a position and a velocity of the object in the first period;
and a fusion unit that outputs the position and velocity output by the first recognition unit as a state of the object at a timing when the first recognition unit outputs the position and velocity.
The recognition device according to claim 1. - 前記第1認識部は、前記物体の位置および速度を前記第1周期で繰り返し出力する機能を有し、
前記第2認識部は、前記物体の位置および速度を前記第2周期で繰り返し出力する機能を有し、
前記第1認識部が初めて出力を行うタイミングでは、前記第1認識部が出力した位置と、前記第1認識部が出力した位置および前記タイミングの前の前回タイミングで前記第2認識部が出力した位置に基づいて前記第1認識部が出力した速度とを前記物体の状態として出力するフュージョン部を更に備える、
請求項1記載の認識装置。 the first recognition unit has a function of repeatedly outputting a position and a velocity of the object in the first period;
the second recognition unit has a function of repeatedly outputting the position and velocity of the object in the second period;
a fusion unit that outputs, at a timing when the first recognition unit performs an output for the first time, a position output by the first recognition unit, and a velocity output by the first recognition unit based on the position output by the first recognition unit and the position output by the second recognition unit at a previous timing before the timing, as a state of the object.
The recognition device according to claim 1. - 前記第1認識部が第1タイミングで第1位置の出力を行った後、前記第1認識部が出力を行わず且つ前記第2認識部が出力を行う第2タイミングにおいて、前記第2認識部は、過去に認識された前記物体の位置および速度の履歴情報に基づく線形補間を行うことにより、前記第1位置を更新する、
請求項1から10のいずれか一項記載の認識装置。 After the first recognition unit outputs the first position at a first timing, at a second timing when the first recognition unit does not output and the second recognition unit outputs, the second recognition unit updates the first position by performing linear interpolation based on history information of positions and velocities of the object recognized in the past.
A recognition device according to any one of claims 1 to 10. - 請求項1記載の認識装置と、
前記認識装置が状態を出力した物体への接近を回避するように前記移動体を移動させる運転制御部と、
を備える移動体の制御装置。 A recognition device according to claim 1;
a driving control unit that moves the moving body so as to avoid approaching an object whose state is output by the recognition device;
A control device for a moving object comprising: - 移動体の周辺状況を検知するための検知デバイスの出力に基づいて、前記移動体の周辺に存在する物体の位置および速度を認識する認識装置を用いて実行される認識方法であって、
前記物体の位置を認識するための処理を行った結果である前記物体の位置を第1周期で繰り返し出力することと、
前記物体の速度を認識するための処理を行った結果である前記物体の速度を前記第1周期よりも短い第2周期で繰り返し出力することと、
を備える認識方法。 A recognition method executed by a recognition device that recognizes positions and velocities of objects present around a moving body based on an output of a detection device for detecting a surrounding situation of the moving body, the method comprising:
repeatedly outputting the position of the object, which is a result of performing processing for recognizing the position of the object, in a first period;
repeatedly outputting the velocity of the object, which is a result of performing a process for recognizing the velocity of the object, at a second period shorter than the first period;
The recognition method includes: - 移動体の周辺状況を検知するための検知デバイスの出力に基づいて、前記移動体の周辺に存在する物体の位置および速度を認識する認識装置のプロセッサに、
前記物体の位置を認識するための処理を行った結果である前記物体の位置を第1周期で繰り返し出力することと、
前記物体の速度を認識するための処理を行った結果である前記物体の速度を前記第1周期よりも短い第2周期で繰り返し出力することと、
を実行させるためのプログラム。 A processor of a recognition device that recognizes positions and speeds of objects present around a moving object based on an output of a detection device for detecting a surrounding situation of the moving object,
repeatedly outputting the position of the object, which is a result of performing processing for recognizing the position of the object, in a first period;
repeatedly outputting the velocity of the object, which is a result of performing processing for recognizing the velocity of the object, at a second period shorter than the first period;
A program for executing.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2023/012948 WO2024201837A1 (en) | 2023-03-29 | 2023-03-29 | Recognition device, mobile body control device, recognition method, and program |
JPPCT/JP2023/012948 | 2023-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024204585A1 true WO2024204585A1 (en) | 2024-10-03 |
Family
ID=92903680
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/012948 WO2024201837A1 (en) | 2023-03-29 | 2023-03-29 | Recognition device, mobile body control device, recognition method, and program |
PCT/JP2024/012743 WO2024204585A1 (en) | 2023-03-29 | 2024-03-28 | Recognition device, moving body control device, recognition method, and program |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/012948 WO2024201837A1 (en) | 2023-03-29 | 2023-03-29 | Recognition device, mobile body control device, recognition method, and program |
Country Status (1)
Country | Link |
---|---|
WO (2) | WO2024201837A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3867505B2 (en) * | 2001-03-19 | 2007-01-10 | 日産自動車株式会社 | Obstacle detection device |
JP4144538B2 (en) * | 2003-11-07 | 2008-09-03 | 日産自動車株式会社 | VEHICLE DRIVE OPERATION ASSISTANCE DEVICE AND VEHICLE WITH VEHICLE DRIVE OPERATION ASSISTANCE DEVICE |
KR102458664B1 (en) * | 2018-03-08 | 2022-10-25 | 삼성전자주식회사 | Electronic apparatus and method for assisting driving of a vehicle |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008082871A (en) * | 2006-09-27 | 2008-04-10 | Mazda Motor Corp | Obstacle detector for vehicle |
WO2018134941A1 (en) * | 2017-01-19 | 2018-07-26 | 本田技研工業株式会社 | Vehicle control system, vehicle control method, and vehicle control program |
Also Published As
Publication number | Publication date |
---|---|
WO2024201837A1 (en) | 2024-10-03 |