WO2020071080A1 - Information processing device, control method and program - Google Patents
情報処理装置、制御方法及びプログラム (Information processing device, control method and program)
- Publication number
- WO2020071080A1 (PCT/JP2019/035825)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- manipulator
- transfer
- unit
- information processing
- sensor
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/081—Touching devices, e.g. pressure-sensitive
- B25J13/082—Grasping-force detectors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/02—Hand grip control means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/085—Force or torque sensors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1633—Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40202—Human robot coexistence
Definitions
- the present disclosure relates to an information processing device, a control method, and a program.
- the present disclosure proposes an information processing device, a control method, and a program that enable smooth delivery of an object.
- the information processing apparatus according to the present disclosure includes a control unit that, when an object gripped by a manipulator is delivered to a recipient, controls the manipulator so that the moving speed of the object maintains continuity.
- According to this configuration, when an object is transferred to a transfer target person, the manipulator is controlled so that the continuity of the moving speed of the object in the transfer direction is maintained. This reduces sudden changes in the displacement of the object in the transfer direction, so that the object can be handed over smoothly.
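- As a non-authoritative illustration of the "continuity of the moving speed" criterion, the following minimal Python sketch estimates the object's velocity in the transfer direction from successive displacement samples and flags a discontinuity when the implied acceleration exceeds a bound. The sampling period, the acceleration bound, and the function name are assumptions for illustration, not values taken from the disclosure.

```python
# Minimal sketch (assumed names/parameters): judge whether the moving speed of the
# object along the transfer direction A1 maintains continuity, from sampled displacements.
from typing import List

def is_speed_continuous(slip_samples: List[float], dt: float = 0.01,
                        max_accel: float = 2.0) -> bool:
    """slip_samples: displacement of the object along A1 at each control tick [m].
    Returns False if the velocity change between ticks implies an acceleration
    larger than max_accel [m/s^2]."""
    velocities = [(b - a) / dt for a, b in zip(slip_samples, slip_samples[1:])]
    accels = [abs(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]
    return all(a <= max_accel for a in accels)

if __name__ == "__main__":
    smooth = [0.0000, 0.0001, 0.0003, 0.0006, 0.0010]   # gradually accelerating
    abrupt = [0.0000, 0.0000, 0.0000, 0.0010, 0.0020]   # sudden jump in displacement
    print(is_speed_continuous(smooth), is_speed_continuous(abrupt))  # True False
```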
- FIG. 2 is a graph showing an example of an optimal transfer operation in one embodiment.
- FIG. 3 is a graph illustrating an example of a non-optimal transfer operation in one embodiment.
- FIG. 4 is a block diagram illustrating a schematic configuration example of an autonomous robot according to one embodiment.
- FIG. 5 is an external view showing a configuration example of a hand according to one embodiment.
- FIG. 6 is an external view showing a configuration example when a hand is viewed from a direction A2 in FIG. 5.
- FIG. 7 is a block diagram illustrating an example of a functional configuration for executing a transfer operation of the autonomous robot according to the embodiment.
- FIG. 9 is a flowchart illustrating a schematic operation example according to an embodiment.
- FIG. 10 is a flowchart illustrating a transfer operation according to a first example of one embodiment.
- FIG. 11 is a flowchart illustrating a transfer operation according to a second example of one embodiment.
- FIG. 13 is a flowchart illustrating a transfer operation according to a fourth example of one embodiment.
- FIG. 14 is a diagram for explaining blending of a gripping operation and a release operation according to one embodiment.
- FIG. 16 is a schematic diagram illustrating an example of a slip sensor configured using a vision sensor according to one embodiment.
- FIG. 17 is a diagram illustrating an example of the deforming portion illustrated in FIG. 16.
- A diagram for explaining the initial slip detected by the vision sensor according to one embodiment (part 1).
- A diagram for explaining the initial slip detected by the vision sensor according to one embodiment (part 2).
- A diagram for explaining the initial slip detected by the vision sensor according to one embodiment (part 3).
- FIG. 21 is a schematic diagram showing an example of a slip sensor configured using a pressure distribution sensor according to one embodiment.
- FIG. 24 is a block diagram illustrating an example of a system configuration according to an embodiment.
- Hereinafter, an information processing apparatus, a control method, and a program according to an embodiment of the present disclosure will be described in detail with reference to the drawings.
- In the following, the information processing device, the control method, and the program are described with examples in which an autonomous robot having an arm (also referred to as a manipulator), such as a robot hand, a humanoid robot, or a pet robot, exchanges an unknown object with a human.
- FIG. 1 is a schematic diagram for explaining an operation of transferring an object from an autonomous robot according to the present embodiment to a person (hereinafter, referred to as a recipient).
- the autonomous robot 1 is, for example, a humanoid robot including a head 41, a body 42, a carriage 43, and manipulators 44L and 44R.
- the body portion 42 and the carriage unit 43 constitute a movable body that can move on, for example, a floor P1.
- the carriage unit 43 accommodates, for example, a traveling mechanism such as wheels or tracks, a traveling motor, a battery, and a control unit.
- the traveling mechanism is not limited to wheels, caterpillars, and the like, and may be a walking mechanism configured with two or more legs or the like.
- the autonomous robot 1 is not limited to a humanoid robot, and various robots having at least one arm unit, such as a manipulator alone or an autonomous mobile body equipped with a manipulator, can be applied as the autonomous robot 1.
- Each of the manipulators 44L and 44R (hereinafter simply referred to as the manipulator 44 when they are not distinguished) includes an upper arm 441 attached to a position of the body 42 corresponding to the shoulder, a forearm 442 attached to the upper arm 441 at a position corresponding to the elbow of the manipulator 44, and a hand 443 attached to the forearm 442 at a position corresponding to the wrist of the manipulator 44.
- the upper arm 441 and the forearm 442 constitute, for example, the arm portion of the manipulator 44.
- Each joint corresponding to the shoulder, elbow, and wrist of the manipulator 44 is provided with, for example, a drive unit and a joint mechanism for moving the manipulator 44 like a human arm.
- As the drive unit for example, an electromagnetic motor, a hydraulic actuator, a pneumatic actuator, or the like can be used.
- the hand 443 is provided with fingers as a mechanism for gripping the object.
- In the present embodiment, a case where the hand portion 443 has two fingers is illustrated, but the present invention is not limited to this; various modifications such as three fingers, four fingers, or five fingers are possible.
- a hand using a jamming transition, a hand using suction by air pressure control, or the like can be used as the hand portion 443.
- the transfer operation of an object between a human and an autonomous robot includes a gripping operation in which the autonomous robot grips the object, a release operation in which the autonomous robot releases the object, and a receiving operation in which the human receives the object.
- the autonomous robot 1 releases the object gripped by the gripping operation by a release operation. That is, when transferring the grasped object to a person, the autonomous robot 1 changes the blend ratio of the grasping operation and the release operation to gradually increase the ratio of the release operation, and finally releases the object.
- an optimal release operation differs for each object and each recipient.
- an optimal release operation differs between a case where a cup of hot water is exchanged and a case where a tennis ball is exchanged.
- In an environment in which an object to be transferred is unknown, such as a home, a nursing care facility, or a store in which humans and autonomous robots coexist, or in which the behavior and situation of the recipient are unknown, that is, in which an optimal release operation model cannot be created in advance, a configuration and operation that enable a high-quality physical interaction between a human and an autonomous robot, specifically the handover of an object, will be described with specific examples.
- the optimal transfer operation may be, for example, that the rate of change of the slip amount in the transfer direction when transferring an object maintains continuity (for example, see FIG. 2).
- the present invention is not limited to this, and various types of transfer operations in which the transfer target person can receive the object from the autonomous robot 1 without stress can be defined as optimal transfer operations.
- An inappropriate transfer operation is, for example, an operation that leads to an event such as the generation of an unnecessarily large force when the transfer target person receives the object from the autonomous robot 1, deformation or damage of the object, or spilling of the contents when the object is received.
- In the present embodiment, machine learning is performed in which the rate of change of the gripping force of the hand 443 and information detected by various sensors mounted on the hand 443 are used as inputs, and the continuity of the moving speed of the object B1 in the transfer direction A1 is used as an output.
- This makes it possible to smoothly deliver the object B1 in accordance with the characteristics of the object B1 (static friction coefficient, dynamic friction coefficient, mass, shape, size, rigidity, strength, temperature, humidity, etc.) and the behavior and situation of the transfer target person.
- In addition, a change in the slip amount or the initial slip amount of the object B1 gripped by the hand 443 is measured, and the release operation is started when the change includes a component in a direction different from the direction of gravity, for example, a component in the direction opposite to gravity, a component in the direction opposite to the rotational moment due to gravity, or a component in the transfer direction A1 (for example, toward the position of the hand H1 of the transfer target person).
- Accordingly, the release operation can be started at the initial stage of the transfer operation, so that the load applied to the object B1 and the fluctuation of the gripping force of the hand 443 can be minimized, enabling a smoother handover of the object B1.
- Furthermore, the presence of the transfer target person and changes in his or her emotion obtained from image input information, voice input information, and the like may be added to the inputs and outputs of the machine learning.
- the release operation may be started after confirming the existence of the transfer target person and the existence of the hand H1 using the result of the image processing and the information from the distance measurement sensor.
- the existence of the person to be given or received may be confirmed using not only the result of the image processing and the information from the distance measuring sensor but also, for example, voice input / output information.
- In addition, during the release operation, the release operation may be controlled so that the continuity of the moving speed of the object B1 in the transfer direction A1 is maintained.
- As a result, abrupt changes in the displacement of the object B1 in the transfer direction A1 can be reduced, so that the object B1 can be handed over more smoothly.
- the amount of slip of the object B1 in the direction of gravity or the initial amount of slip may be continuously measured. Accordingly, it is possible to reduce the possibility that the object B1 is accidentally dropped or the object B1 is unnaturally displaced up and down in the hand portion 443.
- The transfer direction A1 can be defined, for example, after recognizing the position, posture, and shape of the hand H1 of the transfer target person, as the direction toward the carpal bones on a line connecting the upper end of the third metacarpal and the carpal bones.
- Alternatively, after recognizing the position and shape of the hand H1 and the trunk of the transfer target person, the transfer direction A1 can be defined as the direction toward the sternum on a line connecting the center of the carpal bones and the center of the sternum, or as the direction toward the humeral head on a line connecting the center of the carpal bones and the center of the humeral head.
- the definition of the transfer direction A1 is not limited to the above and can be variously modified.
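- The sketch below illustrates these three definitions as simple vector differences between recognized landmark positions; the landmark names and coordinates are assumptions, and how the landmarks are recognized is outside the scope of this sketch.

```python
# Minimal sketch (assumed landmark names): compute a unit vector for the transfer
# direction A1 from recognized body landmarks of the transfer target person.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Definition 1: from the upper end of the third metacarpal toward the carpal bones.
def a1_from_hand(metacarpal3_top, carpal_center):
    return unit(np.subtract(carpal_center, metacarpal3_top))

# Definition 2: from the center of the carpal bones toward the center of the sternum.
def a1_from_trunk(carpal_center, sternum_center):
    return unit(np.subtract(sternum_center, carpal_center))

# Definition 3: from the center of the carpal bones toward the center of the humeral head.
def a1_from_shoulder(carpal_center, humeral_head_center):
    return unit(np.subtract(humeral_head_center, carpal_center))

if __name__ == "__main__":
    carpal = [0.60, 0.10, 0.95]     # illustrative 3-D positions in the robot base frame [m]
    sternum = [0.75, 0.00, 1.30]
    print(a1_from_trunk(carpal, sternum))
```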
- the release operation may include grip force control of the hand 443, operation control of an arm of the manipulator 44 (hereinafter referred to as arm operation control), control of the whole body operation of the autonomous robot 1, and the like.
- In the release operation, the grip force control of the hand 443, the arm operation control of the manipulator 44, the whole-body operation control of the autonomous robot 1, and the like are blended at a certain blend ratio, and this blend ratio is evaluated by machine learning.
- Specifically, machine learning is used in which the rate of change of the gripping force applied to the hand 443, information detected by various sensors mounted on the hand 443, and the like are used as inputs, and the continuity of the moving speed of the object B1 in the transfer direction A1 is used as an output.
- The gripping force control may be to control the amount of change in force per unit time when the gripping force generated in the hand 443 for gripping the object B1 is reduced toward release.
- The arm operation control may be to move the position of the object B1 in the transfer direction A1 by changing the posture of the arm portion that holds the object B1 and places it in the target coordinate space, while controlling the amount of position change per unit time. With such arm operation control, it is possible to reduce the inertial force in the transfer direction A1 generated in the object B1 when the object B1 is released from the hand 443 immediately after the start of the release operation is detected.
- Similarly, the whole-body operation control may be to move the position of the object B1 in the transfer direction A1 by changing the position and posture of the autonomous robot 1 that holds the object B1 and places it in the target coordinate space, while controlling the amount of position change per unit time.
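- As a rough, non-authoritative illustration of how one control tick of the release operation could be composed from these three controls at a blend ratio, consider the sketch below. The data structure, field names, and the example ratio values are assumptions for illustration only.

```python
# Minimal sketch (assumed units and names): blend grip-force control, arm operation
# control, and whole-body operation control into one release-operation control tick.
from dataclasses import dataclass

@dataclass
class ReleaseCommand:
    grip_force_delta: float   # change of gripping force in this tick [N] (negative = release)
    arm_step: float           # object displacement along A1 produced by the arm [m]
    base_step: float          # object displacement along A1 produced by whole-body motion [m]

def blended_release_step(target_object_step, grip_release_rate, blend):
    """blend: dict with weights for 'arm' and 'body' that sum to 1.0;
    target_object_step: desired object displacement along A1 for this tick [m];
    grip_release_rate: gripping-force decrease for this tick [N]."""
    return ReleaseCommand(
        grip_force_delta=-abs(grip_release_rate),
        arm_step=blend["arm"] * target_object_step,
        base_step=blend["body"] * target_object_step,
    )

if __name__ == "__main__":
    cmd = blended_release_step(target_object_step=0.002,
                               grip_release_rate=0.1,
                               blend={"arm": 0.8, "body": 0.2})
    print(cmd)
```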
- FIG. 4 is a block diagram illustrating a schematic configuration example of the autonomous robot according to the present embodiment.
- the autonomous robot 1 includes, for example, a control unit 10 formed by interconnecting a CPU (Central Processing Unit) 12, a DRAM (Dynamic Random Access Memory) 13, a flash ROM (Read Only Memory) 14, a PC (Personal Computer) card interface (I/F) 15, a wireless communication unit 16, and a signal processing circuit 11 via an internal bus 17, and a battery 18 as a power source of the autonomous robot 1.
- the autonomous robot 1 also has, as movable mechanisms for realizing motions such as movement and gestures, movable parts such as the joints of the manipulator 44, the joints of the body 42 (such as the neck joint and the waist joint), wheels, and tracks.
- Further, the autonomous robot 1 includes, as sensors for acquiring information such as a moving distance, a moving speed, a moving direction, and a posture (hereinafter referred to as internal sensors), an inertial measurement unit (IMU) 20 and an encoder (or potentiometer) 28 for detecting the driving amount of the actuator 27.
- In addition, an acceleration sensor, an angular velocity sensor, or the like can be used as an internal sensor.
- the autonomous robot 1 also includes, as sensors for acquiring information such as the surrounding terrain and the distance and direction to objects existing around the robot (hereinafter referred to as external sensors), a camera 19 that captures the external situation and a ToF (Time of Flight) sensor 21 that measures the distance to an object existing in a specific direction with respect to the robot.
- As external sensors, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) sensor, a GPS (Global Positioning System) receiver, a magnetic sensor, or a radio field intensity sensor that measures the radio field intensity of the wireless communication unit 16 for Bluetooth (registered trademark), Wi-Fi (registered trademark), or the like may also be used.
- the autonomous robot 1 includes a touch sensor 22 for detecting a physical pressure received from the outside, a microphone 23 for collecting external sounds, and a speaker 24 for outputting sounds and the like to the surroundings. Also, a display unit 25 for displaying various information to a user or the like may be provided.
- the movable part 26 of the autonomous robot 1 has a six-axis force sensor 501, a three-axis force sensor 502, a slip sensor 503, and a distance measurement sensor 504 as sensors for controlling the transfer of the object B1.
- the six-axis force sensor 501 is attached to, for example, the wrist portion of the manipulator 44, and detects the magnitude and direction of the force and torque applied to the wrist portion.
- the three-axis force sensor 502 is attached to, for example, each finger joint of the hand 443, and detects the magnitude and direction of the force or torque applied to the finger joint.
- the slip sensor 503 is attached to, for example, a portion of the hand 443 that contacts the object B1 to be gripped, such as the palm or the pad of a finger, and detects the magnitude (slip amount) and direction of the shear slip between the object B1 and the contacting portion. Further, the slip sensor 503 may detect the magnitude (initial slip amount) and direction of the initial slip generated between the object B1 and the contacting portion.
- As the slip sensor 503, for example, a vision sensor that observes deformation of a viscoelastic body having a predetermined shape attached to a portion of the hand 443 that contacts the object B1, a pressure distribution sensor that measures a two-dimensional distribution of pressure, or the like can be used.
- the distance measurement sensor 504 is attached to a place where the object B1 gripped by the hand 443 can be observed, such as the wrist, palm, back of the hand, or fingertip of the manipulator 44, and measures the distance between the hand 443 and the object B1.
- the 3-axis sensor 261 is attached to, for example, a shoulder portion, and detects a roll angle, a pitch angle, and a yaw angle of the upper arm 441 with respect to the body 42.
- the 1-axis sensor 262 is attached to, for example, an elbow, and detects a pitch angle of the forearm 442 with respect to the upper arm 441.
- the three-axis sensor 263 is attached to, for example, a wrist, and detects a roll angle, a pitch angle, and a yaw angle of the hand 443 with respect to the forearm 442.
- the one-axis sensor 264 is attached to, for example, each finger joint of the hand 443, and detects the pitch angle of each joint.
- the two-axis sensor 265 is attached to, for example, a joint between the carriage unit 43 and the body unit 42, and detects a roll angle and a pitch angle of the body unit 42 with respect to the carriage unit 43.
- the 3-axis sensor 266 is attached to, for example, a neck and detects a roll angle, a pitch angle, and a yaw angle of the head 41 with respect to the body 42.
- The various sensors such as the IMU 20, the touch sensor 22, the ToF sensor 21, the microphone 23, the speaker 24, and the encoder (or potentiometer) 28, as well as the display unit 25, the actuator 27, the camera 19, and the battery 18, are connected to the signal processing circuit 11 of the control unit 10.
- the signal processing circuit 11 sequentially takes in the sensor data, image data, and audio data supplied from the above-described various sensors and sequentially stores them at predetermined positions in the DRAM 13 via the internal bus 17.
- the signal processing circuit 11 sequentially takes in remaining battery power data indicating the remaining battery power supplied from the battery 18 and stores the data in a predetermined position in the DRAM 13.
- the sensor data, image data, audio data, and remaining battery data stored in the DRAM 13 in this manner are used when the CPU 12 controls the operation of the autonomous robot 1, and are transmitted to an external server or the like via the wireless communication unit 16 as necessary.
- the wireless communication unit 16 communicates with an external server or the like via a predetermined network such as a wireless LAN (Local Area Network) or a mobile communication network in addition to Bluetooth (registered trademark) and Wi-Fi (registered trademark). It may be a communication unit for performing communication.
- the CPU 12 reads a control program stored in a memory card 30 loaded in a PC card slot (not shown) via the PC card I/F 15, or a control program stored in the flash ROM 14 directly, and stores it in the DRAM 13.
- the CPU 12 also judges the situation of the robot itself and its surroundings, and the presence or absence of an instruction or action from the user, based on the sensor data, image data, audio data, and remaining battery power data sequentially stored in the DRAM 13 from the signal processing circuit 11 as described above.
- Further, the CPU 12 may perform self-position estimation and various operations using map data stored in the DRAM 13 or the like, or map data and various information acquired from an external server or the like via the wireless communication unit 16.
- the CPU 12 then determines the subsequent action based on the above-described determination result, the estimated self-position, and the control program stored in the DRAM 13, and executes various actions such as movement and gestures by driving the necessary actuators 27 based on the determination result. At this time, the CPU 12 generates audio data as necessary and supplies it to the speaker 24 via the signal processing circuit 11 as an audio signal, thereby outputting sound based on the audio signal to the outside, or causes the display unit 25 to display various information.
- the autonomous robot 1 is configured to be able to act autonomously according to its own and surrounding conditions, and instructions and actions from the user.
- FIG. 5 is an external view illustrating a configuration example of a hand portion according to the present embodiment.
- FIG. 6 is an external view showing a configuration example when the hand is viewed from the direction A2 in FIG.
- the hand portion 443 includes, for example, a base portion 4431 corresponding to the palm and the back of the hand, and two finger portions 4432a and 4432b (hereinafter simply referred to as the finger portions 4432 when they are not distinguished).
- the base portion 4431 is attached to the forearm portion 442 via, for example, a joint mechanism 444 corresponding to a wrist.
- the joint mechanism 444 corresponding to the wrist is provided with the six-axis force sensor 501 that detects the magnitude and direction of the force and torque applied to the wrist, and the three-axis sensor 263 that detects the roll angle, pitch angle, and yaw angle of the hand 443 with respect to the forearm 442.
- Each finger portion 4432 includes a proximal portion 4434 attached to the base portion 4431 via a joint mechanism 4433 corresponding to a finger joint (third joint), and a distal end portion 4436 attached to the proximal portion 4434 via a joint mechanism 4435 corresponding to a finger joint (first joint).
- These two finger portions 4432a and 4432b are attached to the base portion 4431 so that, for example, the surfaces corresponding to the finger pads face each other.
- the joint mechanisms 4433 and 4435 corresponding to the finger joints of each finger portion 4432 are provided with the three-axis force sensor 502 that detects the magnitude and direction of the force or torque applied to each finger joint, and the one-axis sensor 264 that detects the pitch angle of each joint.
- a slip sensor 503 is provided at a portion of the finger portion 4432 that contacts the object B1 when the object B1 is gripped, for example, at a portion corresponding to the finger pad of the distal end portion 4436.
- a ToF sensor 504a serving as the distance measuring sensor 504 is provided on the surface of the base portion 4431 corresponding to the palm, that is, the surface that faces the slip sensor 503 attached to the distal end portion 4436 when the finger portion 4432 is folded.
- Further, a camera 504b serving as the distance measuring sensor 504 is provided at the portion of the base portion 4431 corresponding to the back of the hand so that its roll angle, pitch angle, and yaw angle can be adjusted.
- FIG. 7 is a block diagram illustrating an example of a functional configuration for executing a transfer operation of the autonomous robot according to the present embodiment.
- Among these, the configurations other than the physical interaction execution unit 55 and the learning information storage unit 54 can be realized, for example, by the CPU 12 illustrated in FIG. 4 executing a predetermined program stored in the flash ROM 14 or the memory card 30, or a program downloaded via the wireless communication unit 16.
- the physical interaction execution unit 55 can be realized by, for example, the CPU 12, the signal processing circuit 11, the movable part 26, and the actuator 27 illustrated in FIG. 4.
- the learning information storage unit 54 can be realized by, for example, the flash ROM 14 or the memory card 30 illustrated in FIG. 4.
- the autonomous robot 1 includes, as a functional configuration for executing the operation of handing over the object B1 to the transfer target person, a transfer target person recognition unit 51, an object recognition unit 52, a transfer behavior planning unit 53, a learning information storage unit 54, a physical interaction execution unit 55, a grip information acquisition unit 56, a response time measurement unit 57, an emotion map generation unit 58, and a transfer behavior evaluation unit 59.
- The sensors provided in the autonomous robot 1, for example, the camera 19, the microphone 23, the six-axis force sensor 501, the three-axis force sensor 502, the slip sensor 503, and the distance measurement sensor 504, constitute a sensor group 50 for acquiring various information used in executing the transfer operation.
- the transfer target person recognition unit 51 recognizes, for example by analyzing the image data acquired by the camera 19 and the audio data input from the microphone 23, whether the transfer target person and/or the hand H1 of the transfer target person exists, the intention of the transfer target person to receive the object B1, and the behavior and situation of the transfer target person (hereinafter referred to as the posture of the transfer target person), and inputs the result to the transfer behavior planning unit 53.
- In addition, during the period in which the physical interaction execution unit 55 is executing the release operation, the transfer target person recognition unit 51 analyzes the image data acquired by the camera 19 and the audio data input from the microphone 23 to detect changes in the emotion of the transfer target person, for example, changes such as interrupting or abandoning the receiving operation because the object is hot or cold or because the timing does not match, and inputs the result to the emotion map generation unit 58.
- the object recognition unit 52 recognizes or estimates the position of the object B1 and the characteristics of the object B1, for example, the static friction coefficient, dynamic friction coefficient, mass, shape, size, rigidity, strength, temperature, humidity, and the like, based on, for example, the analysis result of the image data acquired by the camera 19 and the sensor data input from the six-axis force sensor 501, the three-axis force sensor 502, the slip sensor 503, and the like, and inputs the result to the transfer behavior planning unit 53.
- the learning information storage unit 54 stores, for example, a learned model constructed by performing machine learning on a transfer operation performed in the past, a transfer action plan planned by the transfer action planning unit 53, an evaluation result thereof, and the like.
- The configuration for constructing a learned model by performing machine learning on transfer operations executed in the past may be disposed in the autonomous robot 1, or may be disposed on a server connected to the autonomous robot 1 via a predetermined network.
- the transfer behavior planning unit 53 creates a transfer behavior plan for delivering the object B1 to the transfer target person based on the recognition result of the transfer target person input from the transfer target person recognition unit 51, the recognition or estimation result of the object B1 input from the object recognition unit 52, and the learned model, previously planned transfer behavior plans, and their evaluation results stored in the learning information storage unit 54.
- the created transfer behavior plan may include, for example, operations ranging from lifting the object B1 from a table, the floor, or the like, or receiving the object B1 from a person or the like, to handing the object B1 over to the transfer target person.
- the physical interaction execution unit 55 executes the physical interaction (transfer action) for transferring the object B1 to the transfer target person by executing the transfer behavior plan created by the transfer behavior planning unit 53.
- In order to determine, for example, the start timing of the release operation by the physical interaction execution unit 55, the grip information acquisition unit 56 detects the timing at which the transfer target person starts the receiving operation from the component of the slip amount detected by the slip sensor 503, of the initial slip amount, or of the slip amount detected by the distance measurement sensor 504 in the direction opposite to gravity, the direction opposite to the rotational moment due to gravity, or the direction of the position of the hand H1 of the transfer target person, and inputs the detected timing to the physical interaction execution unit 55.
- In addition, while the physical interaction execution unit 55 is executing the release operation, the grip information acquisition unit 56 continuously inputs to the physical interaction execution unit 55 the slip amount of the object B1 in the transfer direction A1 detected by the slip sensor 503 and/or the distance measurement sensor 504.
- Similarly, while the physical interaction execution unit 55 is executing the release operation, the grip information acquisition unit 56 continuously inputs to the physical interaction execution unit 55 the slip amount of the object B1 in the direction of gravity detected by the slip sensor 503 and/or the distance measurement sensor 504.
- Further, during the period in which the physical interaction execution unit 55 is executing the release operation, the grip information acquisition unit 56 inputs, as inputs for the machine learning of the transfer operation, information detected by the various sensors provided in the hand 443, for example, the gripping force detected by the three-axis force sensor 502 and the slip amount and/or initial slip amount detected by the slip sensor 503, to a machine learning unit (not shown).
- Furthermore, the grip information acquisition unit 56 inputs, as an output for the machine learning of the transfer operation, for example, the distance to the object B1 detected by the distance measurement sensor 504 during the period in which the physical interaction execution unit 55 is executing the release operation, to the machine learning unit (not shown).
- the response time measuring unit 57 measures, for example, the time (response time) from when the autonomous robot 1 starts the release operation and begins reducing the gripping force of the hand 443 until the moving speed of the object B1 in the transfer direction A1 increases, and inputs the measured response time to the physical interaction execution unit 55.
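- The following sketch illustrates, under assumed interfaces, how such a response time could be measured: the speed-reading callable, threshold, and timeout are hypothetical parameters, not values from the disclosure.

```python
# Minimal sketch (assumed sampling interface): measure the time from the start of the
# gripping-force reduction until the object's speed along A1 starts to increase.
import time

def measure_response_time(read_speed_along_a1, speed_threshold=0.005,
                          timeout=2.0, dt=0.01):
    """read_speed_along_a1: callable returning the current object speed along A1 [m/s].
    Returns the elapsed time [s], or None if no increase is observed within timeout."""
    t0 = time.monotonic()
    baseline = read_speed_along_a1()
    while time.monotonic() - t0 < timeout:
        if read_speed_along_a1() - baseline > speed_threshold:
            return time.monotonic() - t0
        time.sleep(dt)
    return None

if __name__ == "__main__":
    samples = iter([0.0, 0.0, 0.001, 0.02, 0.03])     # simulated speed readings [m/s]
    print(measure_response_time(lambda: next(samples, 0.03)))
```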
- the emotion map generation unit 58 generates an emotion map by mapping, along the time axis, the changes in the emotion of the transfer target person during the release operation, based on the information regarding the change in the emotion of the transfer target person input from the transfer target person recognition unit 51 while the physical interaction execution unit 55 is executing the release operation, and inputs the generated emotion map to the transfer behavior evaluation unit 59.
- the emotions to be mapped may include not only negative emotions such as hot, cold, or inconsistent timing, but also positive emotions such as comfortable reception.
- the transfer behavior evaluation unit 59 evaluates the transfer behavior plan planned by the transfer behavior planning unit 53 based on the emotion map input from the emotion map generation unit 58, and inputs the evaluation result together with the transfer behavior plan to the learning information storage unit 54.
- FIG. 8 is a schematic diagram illustrating an example of input / output information in the machine learning process.
- As illustrated in FIG. 8, information detected by the various sensors provided in the hand 443, or information obtained from such information, for example, the gripping force detected by the three-axis force sensor 502 or its rate of change, and the slip amount and/or initial slip amount detected by the slip sensor 503 or their rates of change, is given to the input layer, and the continuity of the moving speed of the object B1 in the transfer direction A1 is given to the output layer.
- a higher reward is set as the continuity of the moving speed of the object B1 in the transfer direction A1 is higher.
- Conversely, the lower the continuity of the moving speed of the object B1 in the transfer direction A1, that is, the more discontinuous the amount of movement of the object B1 in the transfer direction A1 per unit time, the larger the negative reward that is set.
- In addition, a reward may be set based on the emotion of the transfer target person and/or its change during the period in which the physical interaction execution unit 55 executes the release operation. For example, a negative reward may be set if a negative emotion is recognized in the transfer target person during the release operation, and a positive reward may be set if a positive emotion is recognized. Whether the transfer target person has a negative emotion or a positive emotion can be determined, for example, based on the emotion map generated by the emotion map generation unit 58.
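- A minimal, non-authoritative sketch of such a reward is shown below: the reward grows with the continuity of the object's speed along A1 and is shifted by an emotion term taken from the emotion map. The functional form and all weights are assumptions for illustration.

```python
# Minimal sketch (assumed weighting): reward for learning the release operation from
# the continuity of the object's speed along A1 and the recipient's emotion.
def transfer_reward(speed_samples, dt=0.01, emotion=0.0, emotion_weight=0.5):
    """speed_samples: object speed along A1 at each tick [m/s];
    emotion: -1.0 (negative) .. +1.0 (positive), e.g. taken from the emotion map."""
    accels = [abs(v2 - v1) / dt for v1, v2 in zip(speed_samples, speed_samples[1:])]
    discontinuity = max(accels, default=0.0)
    continuity_term = 1.0 / (1.0 + discontinuity)   # 1.0 for perfectly smooth motion
    return continuity_term + emotion_weight * emotion

if __name__ == "__main__":
    print(transfer_reward([0.00, 0.01, 0.02, 0.03]))                 # smooth: higher reward
    print(transfer_reward([0.00, 0.00, 0.10, 0.00], emotion=-1.0))   # jerky + negative emotion
```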
- FIG. 9 is a flowchart illustrating a schematic operation example according to the present embodiment.
- the object recognizing unit 52 recognizes the position, the shape, and the size of the object B1 to be transmitted / received, for example, by analyzing the image data acquired by the camera 19 (Step S101).
- the physical interaction executing unit 55 grips the object B1 based on the position, the shape, and the size of the object B1 recognized by the object recognition unit 52 (Step S102).
- Next, the object recognition unit 52 calculates the load caused by lifting the object B1 (step S103), and, based on the calculated load, recognizes or estimates characteristics of the object B1 such as the static friction coefficient, dynamic friction coefficient, mass, rigidity, strength, temperature, and humidity (step S104).
- the transferee recognition unit 51 recognizes the position of the transferee by analyzing, for example, the image data acquired by the camera 19 and the audio data input from the microphone 23 (step S105). Then, the transfer behavior planning unit 53 creates a movement plan to the delivery location of the object B1 determined from the position of the transfer target person recognized by the transfer target person recognition unit 51 (step S106). Then, the physical interaction execution unit 55 moves the autonomous robot 1 to the delivery location according to the movement plan created by the transfer behavior planning unit 53 (Step S107).
- the transferee recognition unit 51 recognizes the position of the hand H1 of the transferee, for example, by analyzing image data acquired by the camera 19 (step S108).
- Next, the transfer behavior planning unit 53 creates a posture control plan up to the delivery posture of the autonomous robot 1 determined from the position of the hand H1 of the transfer target person recognized by the transfer target person recognition unit 51 (step S109).
- the physical interaction executing unit 55 controls the posture of the autonomous robot 1 according to the posture control plan created by the transfer behavior planning unit 53 (Step S110).
- the posture control of the autonomous robot 1 may include, for example, control of the manipulator 44 and control of the inclination of the body 42 and the head 41.
- At this time, the action, the situation, and the like of the hand H1 of the transfer target person can also be recognized.
- Next, the transfer behavior planning unit 53 creates a transfer operation plan for handing the object B1 over to the transfer target person based on the characteristics of the object B1 recognized or estimated by the object recognition unit 52, the position, behavior, situation, and the like of the hand H1 of the transfer target person recognized by the transfer target person recognition unit 51, and the learned model, previously planned transfer behavior plans, and their evaluation results stored in the learning information storage unit 54 (step S111). The physical interaction execution unit 55 then executes the transfer behavior of handing the object B1 over to the transfer target person in accordance with the transfer behavior plan created by the transfer behavior planning unit 53 (step S112), and then this operation ends.
- FIG. 10 is a flowchart illustrating a transfer operation according to the first example of the present embodiment.
- In the first example, a case where the physical interaction execution unit 55 executes the release operation only by the grip force control is illustrated.
- As illustrated in FIG. 10, in the present transfer operation, the grip information acquisition unit 56 first determines whether the component of the slip amount U detected by the slip sensor 503, of the initial slip amount u, or of the slip amount U detected by the distance measurement sensor 504 in the direction opposite to gravity, the direction opposite to the rotational moment due to gravity, or the direction of the position of the hand H1 of the transfer target person (hereinafter referred to as the specific direction) is larger than zero (step S121). That is, it determines whether the receiving operation by the transfer target person has started.
- When the slip amount U or the initial slip amount u in the specific direction is equal to or less than zero (NO in step S121), that is, when the receiving operation by the transfer target person has not yet started, this operation returns to step S121 and waits for the transfer target person to start the receiving operation.
- On the other hand, when the slip amount U or the initial slip amount u in the specific direction is larger than zero (YES in step S121), the grip information acquisition unit 56 specifies the transfer direction A1, for example, by analyzing the image data obtained by the camera 504b of the distance measurement sensor 504 (step S122).
- the specified transfer direction A1 is input from the grip information acquisition unit 56 to the physical interaction execution unit 55 together with the trigger for starting the release operation based on the slip amount U or the initial slip amount u in the specific direction becoming larger than zero.
- the physical interaction execution unit 55 starts a release operation of the object B1 in the transfer direction A1 (step S123).
- the physical interaction execution unit 55 next executes grip force control of the hand unit 443 (step S124). Specifically, the physical interaction execution unit 55 controls the amount of change in force per unit time when the gripping force F generated on the hand 443 for gripping the object B1 is reduced toward release.
- In addition, when the release operation is started, the grip information acquisition unit 56 continues to measure the change in the slip amount of the object B1 in the transfer direction A1, thereby measuring the moving speed of the object B1 in the transfer direction A1 (step S125).
- the measured moving speed of the object B1 in the transfer direction A1 is input to the physical interaction execution unit 55 and used for the grip force control in the physical interaction execution unit 55. That is, the physical interaction execution unit 55 controls the amount of decrease per unit time of the gripping force generated by the hand 443 so that the moving speed of the object B1 in the transfer direction A1 maintains continuity (feedback control).
- The physical interaction execution unit 55 then determines whether the gripping force F generated in the hand 443 has reached zero (step S126), and when it has reached zero (YES in step S126), completes the release operation (step S127), and the present transfer operation ends.
- On the other hand, if the gripping force F has not reached zero (NO in step S126), this operation returns to step S124, and the subsequent operations are repeated until the gripping force F reaches zero.
- As described above, by detecting the start of the receiving operation by the transfer target person in step S121 and starting the release operation (step S123) on that basis, the release operation can be started at the initial stage of the transfer operation, so that the load applied to the object B1 and the fluctuation of the gripping force of the hand 443 can be minimized, enabling a smoother handover of the object B1.
- In addition, by continuously measuring the change in the slip amount of the object B1 in the transfer direction A1 (step S125) and controlling the amount of decrease per unit time of the gripping force F generated in the hand 443 based on the measured change, sudden changes in the displacement of the object B1 in the transfer direction A1 can be reduced.
- As a result, a smooth handover of the object B1 becomes possible, and it is also possible to reduce the possibility that the object B1 is accidentally dropped or is unnaturally displaced up and down in the hand 443.
- After starting the release operation in step S123, the physical interaction execution unit 55 may stop or end the release operation when, for example, the moving speed of the object B1 in the transfer direction A1 does not increase for a certain time or longer, based on the response time measured by the response time measurement unit 57.
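- The sketch below strings steps S121 to S127 together as a simple control loop, including the timeout described above. The `robot` object is a hypothetical interface; its method names, the force-reduction rate, and the thresholds are assumptions, not part of the disclosure.

```python
# Minimal sketch (assumed sensor/actuator interface) of the first example:
# wait for the receiving operation, then reduce the gripping force with feedback so
# that the object's speed along A1 stays continuous, and abort if no motion appears.
import time

def release_by_grip_force_only(robot, dt=0.01, base_rate=0.2,
                               max_accel=2.0, response_timeout=1.0):
    # S121: wait until the slip (or initial slip) has a component in the specific direction.
    while robot.slip_component_in_specific_direction() <= 0.0:
        time.sleep(dt)
    direction_a1 = robot.estimate_transfer_direction()      # S122
    robot.start_release(direction_a1)                        # S123
    start, last_speed = time.monotonic(), 0.0
    while robot.grip_force() > 0.0:                          # S126 loop condition
        speed = robot.object_speed_along(direction_a1)       # S125
        accel = abs(speed - last_speed) / dt
        # S124: slow the force reduction when the motion becomes discontinuous.
        rate = base_rate if accel <= max_accel else base_rate * 0.5
        robot.decrease_grip_force(rate * dt)
        if speed <= 0.0 and time.monotonic() - start > response_timeout:
            robot.abort_release()                            # no receiving motion observed
            return False
        last_speed = speed
        time.sleep(dt)
    robot.finish_release()                                   # S127
    return True
```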
- In the above description, the case where the hand 443 has two fingers has been exemplified; however, when the hand 443 has three or more fingers, the gripping force may be gradually reduced starting from the fingers other than the two fingers that sandwich the object B1.
- Thereby, the risk of the object B1 falling can be reduced, and a more stable transfer operation can be performed.
- FIG. 11 is a flowchart illustrating a transfer operation according to the second example of the present embodiment.
- the second example exemplifies a case in which the physical interaction execution unit 55 performs the release operation by blending the grip force control and the arm operation control at the blend ratio specified based on the learned model.
- In step S224, the physical interaction execution unit 55 executes the grip force control of the hand 443 and the arm operation control of the arm portion blended at the blend ratio specified based on the learned model. Specifically, while controlling the amount of force change per unit time when the gripping force F generated in the hand 443 for gripping the object B1 is reduced toward release, the physical interaction execution unit 55 controls the amount of position change per unit time when the position of the object B1 is moved in the transfer direction A1 by changing the posture of the arm portion that holds the object B1 and places it in the target coordinate space. At this time, the amount of decrease in the gripping force F per unit time and the amount of change in the posture of the arm portion are blended at the above-described blend ratio so that the moving speed of the object B1 in the transfer direction A1 maintains continuity.
- FIG. 12 is a flowchart illustrating a transfer operation according to the third example of the present embodiment.
- In the third example, a case where the physical interaction execution unit 55 executes, as the release operation, the grip force control, the arm operation control, and the whole-body operation control blended at a blend ratio specified based on the learned model is illustrated.
- In step S324, the physical interaction execution unit 55 executes the grip force control of the hand 443, the arm operation control of the arm portion, and the whole-body operation control of the autonomous robot 1 blended at the blend ratio specified based on the learned model. Specifically, while controlling the amount of force change per unit time when the gripping force F generated in the hand 443 for gripping the object B1 is reduced toward release (grip force control), the physical interaction execution unit 55 controls the amount of position change per unit time when the position of the object B1 is moved in the transfer direction A1 by changing the posture of the arm portion holding the object B1 and placing it in the target coordinate space, and the amount of position change per unit time when the position of the object B1 is moved in the transfer direction A1 by changing the position and posture of the autonomous robot 1.
- FIG. 13 is a flowchart illustrating a transfer operation according to the fourth example of the present embodiment.
- the fourth example for example, in addition to the transfer operation shown in the third example, a case where the physical interaction execution unit 55 executes a stop or an end of the release operation according to a change in the emotion of the transfer target person is illustrated.
- In the following description, the third example is used as a base; however, the present invention is not limited to this, and the first example or the second example can also be used as a base.
- In the fourth example, steps S401 to S403 are added.
- In step S401, the transfer target person recognition unit 51 detects a change in the emotion of the transfer target person, for example, by analyzing the image data acquired by the camera 19 or the audio data input from the microphone 23. Information about the detected change in the emotion of the transfer target person may be input to the emotion map generation unit 58, for example, as an input or output for the machine learning.
- In step S402, the transfer target person recognition unit 51 determines whether the detected emotion after the change is a specific emotion.
- the specific emotion may be, for example, a negative emotion that leads to interruption or abandonment of the transfer operation by the transfer target person. If a specific emotion has not been detected (NO in step S402), the operation proceeds to step S125. On the other hand, if a specific emotion has been detected (YES in step S402), the physical interaction executing unit 55 stops or ends the release operation (step S403), and thereafter returns to step S121.
- the instruction to stop the release operation may be input directly from the transfer target person recognition unit 51 to the physical interaction execution unit 55 or may be input via the emotion map generation unit 58.
- As described above, a configuration may be adopted in which the transfer target person recognition unit 51 detects a change in the emotion of the transfer target person by analyzing the image data from the camera 19 and the audio data from the microphone 23, and the release operation by the physical interaction execution unit 55 is interrupted if the detected emotion is a specific emotion that leads to interruption or abandonment of the transfer operation. This allows the physical interaction execution unit 55 to promptly stop or end the release operation when the transfer target person interrupts or abandons the receiving operation, so that the possibility of the object B1 falling or its contents spilling out can be reduced.
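- A minimal sketch of this interrupt check (steps S401 to S403) follows; the emotion labels and the `physical_interaction` interface are hypothetical placeholders for illustration only.

```python
# Minimal sketch (assumed emotion labels): interrupt the release operation when a
# specific negative emotion of the transfer target person is detected.
NEGATIVE_EMOTIONS = {"hot", "cold", "timing_mismatch", "abandon"}   # illustrative labels

def check_and_interrupt(detected_emotion, physical_interaction):
    """detected_emotion: label from the transfer target person recognition unit 51."""
    if detected_emotion in NEGATIVE_EMOTIONS:          # S402: is it a specific emotion?
        physical_interaction.stop_release()            # S403: stop or end the release
        return True
    return False
```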
- FIG. 14 is a diagram for explaining blending between the gripping operation and the release operation according to the present embodiment.
- Here, the physical interaction execution unit 55 executes, as the release operation, the grip force control, the arm operation control, and the whole-body operation control blended at the blend ratio specified based on the learned model (the third example).
- When transferring the grasped object to the transfer target person, the autonomous robot 1 blends (adds) the gripping operation 71 and the release operation 72 at a blend ratio 76, and executes the gripping force control 74 based on the result (73). In the process of releasing the object B1, the autonomous robot 1 reduces the blend ratio 76 and finally releases the object B1.
- The blend ratio 76 can be obtained by calculating (75) the load from the sensor data (for example, the gripping force F, the rotational moment, and the like) detected by the various sensors of the manipulator 44 (the six-axis force sensor 501, the three-axis force sensor 502, the slip sensor 503, and the like) after the physical interaction execution unit 55 executes the release operation, and determining the ratio based on the result.
- the blend ratio thus obtained is also used as a parameter in the arm operation control 77.
- The control amount of the arm operation control 77 is determined based on the sensor data (the gripping force F, the rotational moment, and the like) detected by the various sensors of the manipulator 44 (the six-axis force sensor 501, the three-axis force sensor 502, the slip sensor 503, and the like) after the physical interaction execution unit 55 executes the release operation and on the blend ratio 76, and the control amount of the whole-body operation control 78 is determined so as to support the arm operation control 77.
- FIG. 15 is a diagram for describing blending between a gripping operation and a release operation according to a modification example of the present embodiment.
- In the modification, the blend ratio 76 may be determined based not only on the sensor data (the gripping force F, the rotational moment, and the like) but also on the response time T measured by the response time measurement unit 57 and the change in the emotion of the transfer target person detected by the transfer target person recognition unit 51. Thus, for example, when the transfer target person feels uncomfortable, the blend ratio 76 can be adjusted so as to release the object B1 more carefully or more promptly.
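- As a rough illustration of how the blend ratio 76 could be derived from the load and then adjusted by the response time T and the recipient's emotion, consider the sketch below. The normalization constants, the adjustment factors, and whether a negative emotion should make the release more careful or more prompt are assumptions; the disclosure leaves that choice to the plan.

```python
# Minimal sketch (assumed normalization): blend ratio from the remaining load on the
# manipulator, adjusted by the response time T and the recipient's emotion.
def blend_ratio(grip_force, rotational_moment, response_time=None, emotion=0.0,
                nominal_force=5.0, nominal_moment=0.5, nominal_response=0.3):
    load = min(1.0, 0.5 * grip_force / nominal_force
                    + 0.5 * abs(rotational_moment) / nominal_moment)
    ratio = load                               # high remaining load -> keep more grip
    if response_time is not None:              # slow response -> release more carefully
        ratio = min(1.0, ratio * (response_time / nominal_response))
    if emotion < 0.0:                          # negative emotion -> release more promptly
        ratio *= 0.5                           # (could instead be made more careful)
    return max(0.0, min(1.0, ratio))

if __name__ == "__main__":
    print(blend_ratio(grip_force=3.0, rotational_moment=0.2))
    print(blend_ratio(grip_force=3.0, rotational_moment=0.2,
                      response_time=0.6, emotion=-1.0))
```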
- FIG. 16 is a schematic diagram illustrating an example of a slip sensor configured using a vision sensor.
- As illustrated in FIG. 16, a slip sensor 503A configured using a vision sensor 83 includes a deforming portion 82 provided in a part of a housing 81 of the autonomous robot 1, and a vision sensor 83 that observes the deformation of the deforming portion 82 from the inside of the housing 81.
- the deforming portion 82 is made of, for example, a viscoelastic body 821 such as silicone rubber.
- the viscoelastic body 821 is provided with, for example, a plurality of markers 822 arranged in a two-dimensional lattice.
- By observing with the vision sensor 83 the region in which the markers 822 are deformed or displaced, the slip sensor 503A can identify which region is under pressure, that is, which region is in contact with the object B1.
- an initial slip occurs as a stage before the object B1 actually starts to slide on the deformed portion 82.
- the initial slip is a phenomenon in which slip does not occur in the central part of the contact area and slip occurs in the peripheral part.
- When the slip sensor 503A configured using the vision sensor 83 is used, the state in which the hand 443 grips and lifts the object B1 is used as a reference, and an initial slip from this state in the direction opposite to gravity, in the direction opposite to the rotational moment due to gravity, or in the direction of the hand H1 of the transfer target person is detected by the slip sensor 503A; the timing at which this initial slip is detected can be set as the timing at which the transfer target person starts the receiving operation.
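- One possible way to detect such an initial slip from the marker images, exploiting the fact that the peripheral markers move while the central ones remain stuck, is sketched below; the marker layout, radii, and thresholds are assumptions.

```python
# Minimal sketch (assumed marker layout): detect initial slip from the vision-sensor
# markers on the viscoelastic body: rim markers move while the core markers stick.
import numpy as np

def detect_initial_slip(marker_pos_prev, marker_pos_now, contact_center,
                        core_radius, slip_threshold=1e-4):
    """marker_pos_*: (N, 2) marker coordinates on the sensor surface [m].
    Returns the mean 2-D slip vector of the peripheral markers, or None."""
    prev = np.asarray(marker_pos_prev, float)
    now = np.asarray(marker_pos_now, float)
    dist = np.linalg.norm(prev - np.asarray(contact_center, float), axis=1)
    core, rim = dist < core_radius, dist >= core_radius
    core_motion = np.linalg.norm((now - prev)[core], axis=1).mean() if core.any() else 0.0
    rim_motion_vec = (now - prev)[rim].mean(axis=0) if rim.any() else np.zeros(2)
    # Peripheral markers move while the central ones stick -> initial slip.
    if np.linalg.norm(rim_motion_vec) > slip_threshold and core_motion < slip_threshold:
        return rim_motion_vec
    return None

if __name__ == "__main__":
    prev = np.array([[0.0, 0.0], [0.003, 0.0], [-0.003, 0.0]])
    now = prev + np.array([[0.0, 0.0], [0.0003, 0.0], [0.0003, 0.0]])
    print(detect_initial_slip(prev, now, contact_center=(0.0, 0.0), core_radius=0.002))
```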
- FIG. 21 is a schematic diagram illustrating an example of a slip sensor configured using a pressure distribution sensor.
- the surface of the slip sensor 503B configured using the pressure distribution sensor 91 is distorted in contact with the object B1. Therefore, when the object B1 is brought into contact with the slip sensor 503B, as shown in FIG. 22, the pressure detected at the central portion in the contact region R2 becomes highest. Note that in FIGS. 22 and 23, the color density in the contact region R2 indicates the level of pressure.
- similarly, when the slip sensor 503B configured using the pressure distribution sensor 91 is used, the state in which the hand 443 grips and lifts the object B1 is used as a reference; from this state, an initial slip in the direction opposite to gravity, in the direction opposite to the rotational moment due to gravity, or toward the hand H1 of the recipient is detected by the slip sensor 503B, and the timing at which such an initial slip is detected can be regarded as the timing at which the recipient starts the receiving operation.
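With the pressure distribution sensor 91, one rough way to detect the onset of slip is to track the shift of the pressure centroid within the contact region R2 relative to the reference grasp. The threshold and the use of a 2-D pressure image are assumptions for illustration.

```python
import numpy as np

def pressure_centroid(pressure: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Centroid of the contact region R2 in a 2-D pressure image."""
    mask = pressure > threshold
    ys, xs = np.nonzero(mask)
    w = pressure[ys, xs]
    return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

def centroid_shift(pressure_ref: np.ndarray, pressure_now: np.ndarray) -> np.ndarray:
    """Shift of the pressure centroid relative to the reference state in which
    the hand 443 grips and lifts the object B1; a sustained shift is treated
    here as a sign of initial slip (an illustrative criterion only)."""
    return pressure_centroid(pressure_now) - pressure_centroid(pressure_ref)
```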
- the autonomous robot 1 may be connectable to the server 2 via a predetermined network 3, for example as illustrated in FIG. 24.
- the server 2 may be a server group including a plurality of servers, such as a cloud server.
- the above-described machine learning unit may be arranged in each autonomous robot 1 or in the server 2. In either case, it is preferable that the result of the machine learning by the machine learning unit (for example, a learned model) can be shared by a plurality of autonomous robots 1 connected via the predetermined network 3.
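One way such sharing could look in practice is sketched below: each robot serializes its learned model and exchanges it with the server 2 over the network 3. The endpoint URL, the serialization format, and the use of the `requests` library are assumptions for illustration; the patent does not specify a sharing protocol.

```python
import pickle
import requests  # any transport to the server 2 would do; this is an assumption

SERVER_URL = "http://server-2.example/models/handover"  # hypothetical endpoint

def upload_learned_model(model) -> None:
    """Send this robot's learned model to the server so other robots can reuse it."""
    requests.post(SERVER_URL, data=pickle.dumps(model), timeout=10)

def download_learned_model():
    """Fetch the latest shared learned model from the server."""
    resp = requests.get(SERVER_URL, timeout=10)
    resp.raise_for_status()
    return pickle.loads(resp.content)
```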
- a general-purpose computer or processor may be used as the device that implements the machine learning unit, or a GPGPU (General-Purpose computing on Graphics Processing Units), a large-scale PC cluster, an FPGA (Field-Programmable Gate Array), or the like may be used.
- machine learning is performed with the change rate of the gripping force of the hand 443 and information detected by various sensors mounted on the hand 443 as inputs, and the continuity of the moving speed of the object B1 in the transfer direction A1 as the output. As a result, smoother delivery of the object B1 becomes possible according to the characteristics of the object B1 (static friction coefficient, dynamic friction coefficient, mass, shape and size, rigidity, strength, temperature, humidity, etc.) and the behavior and situation of the transfer target person.
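A highly simplified training sketch consistent with this description is shown below: the inputs are the gripping-force change rate and other hand-sensor values, the target is a continuity score of the object's speed in the transfer direction A1, and a small regressor is fitted. The feature layout, network size, and the use of scikit-learn are assumptions; the patent does not prescribe a specific learning algorithm, and the random data stands in for real sensor logs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: [grip-force change rate, slip amount, initial slip amount, load]
X = np.random.rand(200, 4)            # placeholder sensor logs from the hand 443
# Target: continuity of the object's speed in the transfer direction A1
# (1.0 = perfectly smooth, 0.0 = abrupt change), assumed computed offline.
y = np.random.rand(200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)                       # learns the weights of edges 61 between nodes 60

# At run time, candidate release parameters can be scored and the one
# predicted to keep the moving speed most continuous can be selected.
candidate = np.array([[0.3, 0.01, 0.02, 0.5]])
print(model.predict(candidate))
```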
- a change in the slip amount or the initial slip amount of the object B1 gripped by the hand 443 is measured, and the release operation is started when the direction of the change includes a component in the direction opposite to gravity, opposite to the rotational moment due to gravity, or toward the position of the hand H1 of the recipient.
- the release operation can thus be started at an early stage of the transfer operation, so that the load applied to the object B1 and the fluctuation of the gripping force of the hand 443 can be kept small, enabling a smoother delivery of the object B1.
- the slip amount or the initial slip amount of the object B1 in the direction of gravity is continuously measured. Accordingly, it is possible to reduce the possibility that the object B1 is accidentally dropped or that the object B1 shifts up and down unnaturally within the hand 443.
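The trigger and safety conditions described in the preceding bullets can be captured in a few lines; the vectors, thresholds, and helper names below are illustrative assumptions, and the check against the rotational moment due to gravity is omitted for brevity.

```python
import numpy as np

GRAVITY_DIR = np.array([0.0, 0.0, -1.0])   # world z-axis pointing down

def should_start_release(slip_delta: np.ndarray,
                         to_recipient_hand: np.ndarray,
                         tol: float = 1e-3) -> bool:
    """Start the release when the change in (initial) slip has a component
    opposite to gravity or toward the recipient's hand H1."""
    up_component = -float(np.dot(slip_delta, GRAVITY_DIR))         # > 0: against gravity
    hand_component = float(np.dot(slip_delta, to_recipient_hand))  # > 0: toward hand H1
    return up_component > tol or hand_component > tol

def must_regrip(gravity_slip: float, limit: float = 0.003) -> bool:
    """During the release, keep monitoring slip along gravity; if it grows
    past a limit, tighten the grip again to avoid dropping the object B1."""
    return gravity_slip > limit
```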
- the presence of the transfer target person and changes in his or her emotion, obtained from image input information, voice input information, and the like, are added to the inputs and outputs of the machine learning. As a result, the quality of the physical interaction can be further improved.
- the release operation can be started after confirming the existence of the transfer target person and of the hand H1 using the result of image processing and information from the distance measurement sensor.
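A minimal sketch of that precondition check, assuming hypothetical `detect_person` and `detect_hand` helpers fed by the camera image and the distance measurement sensor 504:

```python
def release_preconditions_met(image, distance_mm: float,
                              detect_person, detect_hand,
                              max_reach_mm: float = 800.0) -> bool:
    """Allow the release only if image processing finds the transfer target
    person and the hand H1, and the range sensor confirms they are in reach."""
    return (detect_person(image)
            and detect_hand(image)
            and distance_mm < max_reach_mm)
```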
- in the above description, the case where the object B1 is gripped by the single manipulator 44 and delivered to the recipient is exemplified, but the present invention is not limited to this; the above-described embodiment can also be applied to a case where the object B1 is gripped using both manipulators 44R and 44L and delivered to the recipient.
- furthermore, not only one autonomous robot 1 but also a plurality of autonomous robots 1 may cooperate to lift the object B1 and deliver it to one or more recipients.
- (1) An information processing apparatus comprising a control unit that controls a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
- (2) The information processing apparatus according to (1), wherein the control unit controls the amount of change per unit time in the gripping force of the manipulator such that the moving speed of the object in the transfer direction maintains continuity.
- (3) The information processing apparatus according to (1), wherein the manipulator includes a hand that grips the object and an arm having the hand attached to one end, and the control unit controls the amount of change per unit time in the gripping force of the hand and the amount of change per unit time in the posture of the arm such that the moving speed of the object in the transfer direction maintains continuity.
- (4) The information processing apparatus according to any one of (1) to (3), wherein the manipulator is attached to a movable body, and the control unit controls the manipulator and the movement of the movable body such that the moving speed of the object maintains continuity.
- (5) The information processing apparatus according to any one of (1) to (4), further comprising a first detection unit that detects a slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount detected by the first detection unit such that the moving speed of the object maintains continuity.
- (6) The information processing apparatus according to any one of (1) to (5), further comprising a second detection unit that detects a slip amount or an initial slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit causes the manipulator to start the operation of handing over the object to the transfer target person when the change in the slip amount or the initial slip amount detected by the second detection unit includes a component in a direction different from the direction of gravity.
- (7) The information processing apparatus according to (6), wherein the direction different from the direction of gravity is a direction opposite to gravity, a direction opposite to the rotational moment due to gravity, or the transfer direction toward the transfer target person.
- (8) The information processing apparatus according to any one of (1) to (7), further comprising a third detection unit that detects a slip amount or an initial slip amount of the object in the direction of gravity at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount or the initial slip amount in the direction of gravity detected by the third detection unit.
- (9) The information processing apparatus according to any one of (1) to (8), further comprising a target person recognition unit that recognizes the emotion of the transfer target person during the operation of handing over the object to the transfer target person, wherein the control unit stops or ends the operation of handing over the object to the transfer target person based on a change in the emotion of the transfer target person detected by the target person recognition unit.
- (10) The information processing apparatus according to (9), wherein the target person recognition unit recognizes the emotion of the transfer target person based on at least one of image data obtained by imaging the transfer target person and voice data obtained by collecting the voice uttered by the transfer target person.
- (11) The information processing apparatus according to any one of (1) to (10), further comprising a planning unit that plans a transfer operation to be executed by the manipulator in order to hand over the object to the transfer target person, wherein the control unit controls the manipulator according to the transfer operation planned by the planning unit such that the moving speed of the object maintains continuity.
- (12) The information processing apparatus according to (11), further comprising an object recognition unit that recognizes or estimates characteristics of the object, wherein the planning unit plans the transfer operation based on the characteristics of the object recognized or estimated by the object recognition unit.
- (13) The information processing apparatus according to (12), wherein the characteristics of the object include at least one of static friction coefficient, dynamic friction coefficient, mass, shape and size, rigidity, strength, temperature, and humidity.
- (14) The information processing apparatus according to any one of (11) to (13), wherein the planning unit plans the transfer operation according to a learned model.
- (15) The information processing apparatus according to (14), wherein the learned model is a model created by machine learning in which the rate of change in the gripping force with which the manipulator grips the object is an input and the continuity of the moving speed of the object in the transfer direction is an output.
- (16) The information processing apparatus according to (15), wherein the machine learning is machine learning in which a positive reward is set as the continuity of the moving speed of the object in the transfer direction becomes higher, and a negative reward is set as the continuity of the moving speed in the transfer direction becomes lower.
- (17) The information processing apparatus according to (5), wherein the first detection unit includes at least one of a viscoelastic body and a vision sensor, a pressure distribution sensor, and a distance measurement sensor.
- (18) The information processing apparatus according to (6) or (7), wherein the second detection unit includes a viscoelastic body and a vision sensor, or a pressure distribution sensor.
- (19) A control method comprising controlling a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
- (20) A program for causing a computer that controls a manipulator to function, the program causing the computer to execute control of the manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Orthopedic Medicine & Surgery (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Manipulator (AREA)
Abstract
Description
1. One Embodiment
1.1 Overview of the autonomous robot
1.2 Object transfer operation
1.3 Schematic configuration of the autonomous robot
1.4 Configuration of the hand
1.5 Functional configuration of the autonomous robot
1.6 Machine learning of the transfer behavior
1.7 Operation example
1.8 Specific examples of the transfer operation
1.8.1 First example
1.8.2 Second example
1.8.3 Third example
1.8.4 Fourth example
1.9 Blending of the gripping operation and the release operation
1.9.1 Modification
1.10 Measurement of initial slip
1.10.1 Vision sensor
1.10.2 Pressure distribution sensor
1.11 System configuration
1.12 Operation and effects
Hereinafter, an information processing apparatus, a control method, and a program according to an embodiment of the present disclosure will be described in detail with reference to the drawings. In the present embodiment, an information processing apparatus, a control method, and a program that make it possible to smoothly perform a physical interaction in which an unknown object is handed over between a person and an autonomous robot equipped with an arm (also referred to as a manipulator), such as a robot hand, a humanoid robot, or a pet robot, will be described with examples.
FIG. 1 is a schematic diagram for explaining an operation of handing an object from the autonomous robot according to the present embodiment to a person (hereinafter referred to as a transfer target person). As shown in FIG. 1, the autonomous robot 1 is, for example, a humanoid robot including a head 41, a body 42, a carriage 43, and manipulators 44L and 44R. The body 42 and the carriage 43 constitute, for example, a movable body that can move on a floor P1 or the like.
In such an autonomous robot 1, when the object B1 gripped by the hand 443 is handed to the hand H1 of the transfer target person, it is preferable that the displacement of the object B1 in the transfer direction A1 be continuous, for example linear, during the execution period of the transfer operation, as shown in FIG. 2. On the other hand, as shown in FIG. 3, when the displacement of the object B1 in the transfer direction A1 during the execution period of the transfer operation includes an abrupt change, there is a high possibility that the object B1 cannot be transferred well from the hand 443 to the hand H1 of the transfer target person, and that an event such as the object B1 falling or its contents spilling will occur.
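One simple way to quantify the continuity contrasted in FIGS. 2 and 3, sketched here as an assumption rather than the patent's own metric, is to look at the largest jump in the object's speed along the transfer direction A1 over the hand-over interval:

```python
import numpy as np

def speed_continuity(displacement_a1: np.ndarray, dt: float) -> float:
    """displacement_a1: sampled displacement of the object B1 along A1.
    Returns a score in (0, 1]; 1 means the speed never jumps (as in FIG. 2),
    small values indicate abrupt changes (as in FIG. 3)."""
    speed = np.diff(displacement_a1) / dt
    jumps = np.abs(np.diff(speed))
    return 1.0 / (1.0 + float(jumps.max(initial=0.0)))

# A linear displacement (constant speed) scores 1.0; a sudden stop scores lower.
print(speed_continuity(np.linspace(0.0, 0.2, 50), dt=0.02))
print(speed_continuity(np.concatenate([np.linspace(0.0, 0.1, 25),
                                       np.full(25, 0.1)]), dt=0.02))
```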
Next, a schematic configuration of the autonomous robot 1 according to the present embodiment will be described in detail with reference to the drawings. FIG. 4 is a block diagram showing a schematic configuration example of the autonomous robot according to the present embodiment. As shown in FIG. 4, the autonomous robot 1 includes, for example, a control unit 10 formed by interconnecting a CPU (Central Processing Unit) 12, a DRAM (Dynamic Random Access Memory) 13, a flash ROM (Read Only Memory) 14, a PC (Personal Computer) card interface (I/F) 15, a wireless communication unit 16, and a signal processing circuit 11 via an internal bus 17, and a battery 18 as the power source of the autonomous robot 1.
Next, the configuration of the hand 443 of the autonomous robot 1 according to the present embodiment will be described in detail with reference to the drawings. In this description, for simplicity, a case where the hand 443 has two fingers is exemplified. FIG. 5 is an external view showing a configuration example of the hand according to the present embodiment. FIG. 6 is an external view showing a configuration example of the hand as viewed from direction A2 in FIG. 5.
Next, a functional configuration for executing the transfer operation of the autonomous robot 1 according to the present embodiment will be described in detail with reference to the drawings. FIG. 7 is a block diagram showing an example of the functional configuration for executing the transfer operation of the autonomous robot according to the present embodiment. Of the functional configuration shown in FIG. 7, the components other than the physical interaction execution unit 55 and the learning information storage unit 54 can be realized, for example, by the CPU 12 shown in FIG. 4 executing a predetermined program stored in the flash ROM 14 or a memory card, or a program downloaded via the wireless communication unit 16. The physical interaction execution unit 55 can be realized, for example, by the CPU 12, the signal processing circuit 11, the movable unit 26, and the actuator 27 shown in FIG. 4. Furthermore, the learning information storage unit 54 can be realized, for example, by the flash ROM 14 or the memory card 30 shown in FIG. 4.
Here, machine learning of the transfer behavior will be described. FIG. 8 is a schematic diagram showing an example of input/output information in the machine learning process. As shown in FIG. 8, in the machine learning process of the transfer behavior in the present embodiment, information detected by the various sensors provided on the hand 443, or information obtained from such information, for example the gripping force detected by the 3-axis force sensor 502 or its rate of change, and the slip amount and/or initial slip amount detected by the slip sensor 503 or their rates of change, is given to the input layer, and information on the continuity of the moving speed of the object B1 in the transfer direction A1 is given to the output layer; the weight of each edge 61 connecting the nodes (also called neurons) 60 of each layer, from the input layer through the hidden layers to the output layer, is then obtained. As a result, a learned model optimal for the transfer operation is created. As described above, the emotion of the transfer target person can also be used as an input or output of the machine learning.
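To make the structure of FIG. 8 concrete, the toy forward pass below maps hand-sensor inputs through one hidden layer to a single continuity output; the layer sizes and the random weights stand in for the edge weights 61 learned between nodes 60 and are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input nodes: grip force, grip-force change rate, slip amount, initial slip amount
W1 = rng.normal(size=(4, 8))   # edge weights 61: input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # edge weights 61: hidden layer -> output layer

def predict_continuity(sensor_inputs: np.ndarray) -> float:
    """Forward pass from the input layer through the hidden layer to the
    single output node giving the continuity of the speed along A1."""
    hidden = np.tanh(sensor_inputs @ W1)   # hidden-layer activations
    z = float((hidden @ W2)[0])            # weighted sum at the output node
    return float(1.0 / (1.0 + np.exp(-z)))  # squash to (0, 1)

print(predict_continuity(np.array([3.0, -0.5, 0.01, 0.02])))
```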
Next, an operation example of the autonomous robot 1 according to the present embodiment will be described in detail with reference to the drawings. FIG. 9 is a flowchart showing a schematic operation example according to the present embodiment. As shown in FIG. 9, in this operation, first, the object recognition unit 52 recognizes the position, shape, dimensions, and the like of the object B1 to be transferred, for example by analyzing image data acquired by the camera 19 (step S101). Subsequently, the physical interaction execution unit 55 grips the object B1 based on the position, shape, dimensions, and the like of the object B1 recognized by the object recognition unit 52 (step S102).
Next, the transfer operation shown in step S112 of FIG. 9 will be described with several examples.
First, the transfer operation according to the first example will be described. FIG. 10 is a flowchart showing the transfer operation according to the first example of the present embodiment. The first example illustrates a case where the physical interaction execution unit 55 executes the release operation by gripping force control alone.
Next, the transfer operation according to the second example will be described. FIG. 11 is a flowchart showing the transfer operation according to the second example of the present embodiment. The second example illustrates a case where the physical interaction execution unit 55 executes, as the release operation, a blend of gripping force control and arm operation control at a blend ratio specified based on the learned model.
Next, the transfer operation according to the third example will be described. FIG. 12 is a flowchart showing the transfer operation according to the third example of the present embodiment. The third example illustrates a case where the physical interaction execution unit 55 executes, as the release operation, a blend of gripping force control, arm operation control, and whole-body operation control at a blend ratio specified based on the learned model.
Next, the transfer operation according to the fourth example will be described. FIG. 13 is a flowchart showing the transfer operation according to the fourth example of the present embodiment. The fourth example illustrates a case where, in addition to the transfer operation shown in the third example, the physical interaction execution unit 55 stops or ends the release operation in response to a change in the emotion of the transfer target person. Although this description is based on the third example, it is not limited thereto and may also be based on the first or second example.
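A rough sketch of the fourth example's additional check is given below, with a hypothetical `recognize_emotion` helper standing in for the transfer target person recognition unit 51; the score range and abort threshold are assumptions for illustration.

```python
def supervise_release(step_release, recognize_emotion, frames,
                      abort_threshold: float = -0.5):
    """Run the release step by step; stop or end it if the recipient's
    emotion (assumed in [-1, 1], negative = displeased) degrades too far."""
    for frame in frames:
        if recognize_emotion(frame) < abort_threshold:
            return "stopped"          # keep gripping; do not drop the object B1
        if step_release() == "done":
            return "completed"
    return "timeout"
```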
Next, the blending of the gripping operation and the release operation will be described in detail with reference to the drawings. FIG. 14 is a diagram for explaining the blending of the gripping operation and the release operation according to the present embodiment. FIG. 14 illustrates the case (the third example) where the physical interaction execution unit 55 executes, as the release operation, a blend of gripping force control, arm operation control, and whole-body operation control at a blend ratio specified based on the learned model.
FIG. 15 is a diagram for explaining the blending of the gripping operation and the release operation according to a modification of the present embodiment. As shown in FIG. 15, the blend ratio 76 may be obtained based on the response time T measured by the response time measurement unit 57 and the change in the emotion of the transfer target person detected by the transfer target person recognition unit 51, in addition to the sensor data (gripping force F, rotational moment F, etc.). Thereby, for example, when the transfer target person feels uncomfortable, the blend ratio 76 can be adjusted so that the object B1 is released more carefully or more promptly.
Next, the measurement of initial slip will be described with specific examples.
First, the case where a vision sensor is used as the sensor for measuring initial slip (corresponding to the slip sensor 503) will be described. FIG. 16 is a schematic diagram showing an example of a slip sensor configured using a vision sensor.
A pressure distribution sensor can also be used as the sensor for measuring initial slip (corresponding to the slip sensor 503). FIG. 21 is a schematic diagram showing an example of a slip sensor configured using a pressure distribution sensor.
As illustrated in FIG. 24, the autonomous robot 1 according to the present embodiment may be connectable to the server 2 via a predetermined network 3. The server 2 may be a server group composed of a plurality of servers, such as a cloud server. As the network 3, various networks such as the Internet, a LAN, and a mobile communication network can be applied.
As described above, according to the present embodiment, during execution of the release operation, the change in the slip amount of the object B1 in the transfer direction A1 is continuously measured, and the release operation is controlled so that the continuity of the moving speed of the object B1 in the transfer direction A1 is maintained. This reduces abrupt changes in the displacement of the object B1 in the transfer direction A1, enabling a smooth delivery of the object B1.
(1)
An information processing apparatus comprising a control unit that controls a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
(2)
The information processing apparatus according to (1), wherein the control unit controls the amount of change per unit time in the gripping force of the manipulator such that the moving speed of the object in the transfer direction maintains continuity.
(3)
The information processing apparatus according to (1), wherein the manipulator includes a hand that grips the object and an arm having the hand attached to one end, and the control unit controls the amount of change per unit time in the gripping force of the hand and the amount of change per unit time in the posture of the arm such that the moving speed of the object in the transfer direction maintains continuity.
(4)
The information processing apparatus according to any one of (1) to (3), wherein the manipulator is attached to a movable body, and the control unit controls the manipulator and the movement of the movable body such that the moving speed of the object maintains continuity.
(5)
The information processing apparatus according to any one of (1) to (4), further comprising a first detection unit that detects a slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount detected by the first detection unit such that the moving speed of the object maintains continuity.
(6)
The information processing apparatus according to any one of (1) to (5), further comprising a second detection unit that detects a slip amount or an initial slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit causes the manipulator to start the operation of handing over the object to the transfer target person when the change in the slip amount or the initial slip amount detected by the second detection unit includes a component in a direction different from the direction of gravity.
(7)
The information processing apparatus according to (6), wherein the direction different from the direction of gravity is a direction opposite to gravity, a direction opposite to the rotational moment due to gravity, or the transfer direction toward the transfer target person.
(8)
The information processing apparatus according to any one of (1) to (7), further comprising a third detection unit that detects a slip amount or an initial slip amount of the object in the direction of gravity at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount or the initial slip amount in the direction of gravity detected by the third detection unit.
(9)
The information processing apparatus according to any one of (1) to (8), further comprising a target person recognition unit that recognizes the emotion of the transfer target person during the operation of handing over the object to the transfer target person, wherein the control unit stops or ends the operation of handing over the object to the transfer target person based on a change in the emotion of the transfer target person detected by the target person recognition unit.
(10)
The information processing apparatus according to (9), wherein the target person recognition unit recognizes the emotion of the transfer target person based on at least one of image data obtained by imaging the transfer target person and voice data obtained by collecting the voice uttered by the transfer target person.
(11)
The information processing apparatus according to any one of (1) to (10), further comprising a planning unit that plans a transfer operation to be executed by the manipulator in order to hand over the object to the transfer target person, wherein the control unit controls the manipulator according to the transfer operation planned by the planning unit such that the moving speed of the object maintains continuity.
(12)
The information processing apparatus according to (11), further comprising an object recognition unit that recognizes or estimates characteristics of the object, wherein the planning unit plans the transfer operation based on the characteristics of the object recognized or estimated by the object recognition unit.
(13)
The information processing apparatus according to (12), wherein the characteristics of the object include at least one of static friction coefficient, dynamic friction coefficient, mass, shape and size, rigidity, strength, temperature, and humidity.
(14)
The information processing apparatus according to any one of (11) to (13), wherein the planning unit plans the transfer operation according to a learned model.
(15)
The information processing apparatus according to (14), wherein the learned model is a model created by machine learning in which the rate of change in the gripping force with which the manipulator grips the object is an input and the continuity of the moving speed of the object in the transfer direction is an output.
(16)
The information processing apparatus according to (15), wherein the machine learning is machine learning in which a positive reward is set as the continuity of the moving speed of the object in the transfer direction becomes higher, and a negative reward is set as the continuity of the moving speed in the transfer direction becomes lower.
(17)
The information processing apparatus according to (5), wherein the first detection unit includes at least one of a viscoelastic body and a vision sensor, a pressure distribution sensor, and a distance measurement sensor.
(18)
The information processing apparatus according to (6) or (7), wherein the second detection unit includes a viscoelastic body and a vision sensor, or a pressure distribution sensor.
(19)
A control method comprising controlling a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
(20)
A program for causing a computer that controls a manipulator to function, the program causing the computer to execute control of the manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
2 Server
3 Network
10 Control unit
11 Signal processing circuit
12 CPU
13 DRAM
14 Flash ROM
15 PC card I/F
16 Wireless communication unit
17 Internal bus
18 Battery
19 Camera
20 IMU
21 ToF sensor
22 Touch sensor
23 Microphone
24 Speaker
25 Display unit
26 Movable unit
261 3-axis sensor (shoulder)
262 1-axis sensor (elbow)
263 3-axis sensor (wrist)
264 1-axis sensor (finger joint)
265 2-axis sensor (waist)
266 3-axis sensor (neck)
27 Actuator
28 Encoder (potentiometer)
30 Memory card
41 Head
42 Body
43 Carriage
44, 44L, 44R Manipulator
441 Upper arm
442 Forearm
443 Hand
444 Joint mechanism
4431 Base
4432, 4432a, 4432b Finger
4433, 4435 Joint mechanism
4434 Proximal phalanx
4436 Distal phalanx
50 Sensor group
501 6-axis force sensor
502 3-axis force sensor
503, 503A, 503B Slip sensor
504 Distance measurement sensor
504a ToF sensor
504b Camera
51 Transfer target person recognition unit
52 Object recognition unit
53 Transfer behavior planning unit
54 Learning information storage unit
55 Physical interaction execution unit
56 Grip information acquisition unit
57 Response time measurement unit
58 Emotion map generation unit
59 Transfer behavior evaluation unit
60 Node (neuron)
61 Edge
71 Gripping operation
72 Release operation
73 α × gripping operation + (1-α) × release operation
74 Gripping force control
75 Load calculation
76 Blend ratio
77 Arm operation control
78 Whole-body operation control
81 Housing
82 Deforming portion
83 Vision sensor
821 Viscoelastic body
822 Marker
823 Initial slip
91 Pressure distribution sensor
A1 Transfer direction
B1 Object
H1 Hand
P1 Floor
R1 Region
R2 Contact region
Claims (20)
- An information processing apparatus comprising a control unit that controls a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
- The information processing apparatus according to claim 1, wherein the control unit controls the amount of change per unit time in the gripping force of the manipulator such that the moving speed of the object in the transfer direction maintains continuity.
- The information processing apparatus according to claim 1, wherein the manipulator includes a hand that grips the object and an arm having the hand attached to one end, and the control unit controls the amount of change per unit time in the gripping force of the hand and the amount of change per unit time in the posture of the arm such that the moving speed of the object in the transfer direction maintains continuity.
- The information processing apparatus according to claim 1, wherein the manipulator is attached to a movable body, and the control unit controls the manipulator and the movement of the movable body such that the moving speed of the object maintains continuity.
- The information processing apparatus according to claim 1, further comprising a first detection unit that detects a slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount detected by the first detection unit such that the moving speed of the object maintains continuity.
- The information processing apparatus according to claim 1, further comprising a second detection unit that detects a slip amount or an initial slip amount of the object at a portion where the object contacts the manipulator, wherein the control unit causes the manipulator to start the operation of handing over the object to the transfer target person when the change in the slip amount or the initial slip amount detected by the second detection unit includes a component in a direction different from the direction of gravity.
- The information processing apparatus according to claim 6, wherein the direction different from the direction of gravity is a direction opposite to gravity, a direction opposite to the rotational moment due to gravity, or the transfer direction toward the transfer target person.
- The information processing apparatus according to claim 1, further comprising a third detection unit that detects a slip amount or an initial slip amount of the object in the direction of gravity at a portion where the object contacts the manipulator, wherein the control unit controls the manipulator based on the slip amount or the initial slip amount in the direction of gravity detected by the third detection unit.
- The information processing apparatus according to claim 1, further comprising a target person recognition unit that recognizes the emotion of the transfer target person during the operation of handing over the object to the transfer target person, wherein the control unit stops or ends the operation of handing over the object to the transfer target person based on a change in the emotion of the transfer target person detected by the target person recognition unit.
- The information processing apparatus according to claim 9, wherein the target person recognition unit recognizes the emotion of the transfer target person based on at least one of image data obtained by imaging the transfer target person and voice data obtained by collecting the voice uttered by the transfer target person.
- The information processing apparatus according to claim 1, further comprising a planning unit that plans a transfer operation to be executed by the manipulator in order to hand over the object to the transfer target person, wherein the control unit controls the manipulator according to the transfer operation planned by the planning unit such that the moving speed of the object maintains continuity.
- The information processing apparatus according to claim 11, further comprising an object recognition unit that recognizes or estimates characteristics of the object, wherein the planning unit plans the transfer operation based on the characteristics of the object recognized or estimated by the object recognition unit.
- The information processing apparatus according to claim 12, wherein the characteristics of the object include at least one of static friction coefficient, dynamic friction coefficient, mass, shape and size, rigidity, strength, temperature, and humidity.
- The information processing apparatus according to claim 11, wherein the planning unit plans the transfer operation according to a learned model.
- The information processing apparatus according to claim 14, wherein the learned model is a model created by machine learning in which the rate of change in the gripping force with which the manipulator grips the object is an input and the continuity of the moving speed of the object in the transfer direction is an output.
- The information processing apparatus according to claim 15, wherein the machine learning is machine learning in which a positive reward is set as the continuity of the moving speed of the object in the transfer direction becomes higher, and a negative reward is set as the continuity of the moving speed in the transfer direction becomes lower.
- The information processing apparatus according to claim 5, wherein the first detection unit includes at least one of a viscoelastic body and a vision sensor, a pressure distribution sensor, and a distance measurement sensor.
- The information processing apparatus according to claim 6, wherein the second detection unit includes a viscoelastic body and a vision sensor, or a pressure distribution sensor.
- A control method comprising controlling a manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
- A program for causing a computer that controls a manipulator to function, the program causing the computer to execute control of the manipulator such that, when an object gripped by the manipulator is handed over to a transfer target person, the moving speed of the object maintains continuity.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980064044.1A CN112770876A (zh) | 2018-10-05 | 2019-09-12 | 信息处理装置、控制方法和程序 |
KR1020217007107A KR102716734B1 (ko) | 2018-10-05 | 2019-09-12 | 정보 처리 장치, 제어 방법 및 프로그램 |
US17/279,990 US12128560B2 (en) | 2018-10-05 | 2019-09-12 | Information processing device, control method, and program |
EP19869651.0A EP3862148A4 (en) | 2018-10-05 | 2019-09-12 | INFORMATION PROCESSING DEVICE, CONTROL METHOD AND PROGRAM |
JP2020550247A JP7318655B2 (ja) | 2018-10-05 | 2019-09-12 | 情報処理装置、制御方法及びプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018190285 | 2018-10-05 | ||
JP2018-190285 | 2018-10-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020071080A1 true WO2020071080A1 (ja) | 2020-04-09 |
Family
ID=70055156
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/035825 WO2020071080A1 (ja) | 2018-10-05 | 2019-09-12 | 情報処理装置、制御方法及びプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US12128560B2 (ja) |
EP (1) | EP3862148A4 (ja) |
JP (1) | JP7318655B2 (ja) |
KR (1) | KR102716734B1 (ja) |
CN (1) | CN112770876A (ja) |
WO (1) | WO2020071080A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4082728A3 (de) * | 2021-04-09 | 2023-02-22 | STILL GmbH | Mobiler kommissionierroboter |
WO2024053204A1 (ja) * | 2022-09-09 | 2024-03-14 | 東京ロボティクス株式会社 | モバイルマニピュレータ及びその制御方法、プログラム |
EP4245483A4 (en) * | 2020-11-10 | 2024-04-24 | Sony Group Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021049597A (ja) * | 2019-09-24 | 2021-04-01 | ソニー株式会社 | 情報処理装置、情報処理システム及び情報処理方法 |
KR20220115945A (ko) * | 2019-12-16 | 2022-08-19 | 고쿠리츠다이가쿠호진 도호쿠다이가쿠 | 파지 장치, 제어 방법 및 프로그램 |
JP7458818B2 (ja) * | 2020-02-21 | 2024-04-01 | キヤノン株式会社 | ロボット装置、インタフェース装置、制御装置、エンドエフェクタ、制御方法、ロボット装置を用いた物品の製造方法、プログラム及び記録媒体 |
GB2592411B (en) * | 2020-02-27 | 2022-08-17 | Dyson Technology Ltd | Force sensing device |
WO2021200743A1 (ja) * | 2020-04-02 | 2021-10-07 | ファナック株式会社 | ロボットの教示位置を修正するための装置、教示装置、ロボットシステム、教示位置修正方法、及びコンピュータプログラム |
WO2023076694A1 (en) * | 2021-11-01 | 2023-05-04 | Alpha Reactor Corporation | Robotic assistance device using reduction of cognitive load of a user |
CN115635482B (zh) * | 2022-10-18 | 2024-01-30 | 深圳市人工智能与机器人研究院 | 基于视觉的机器人到人物体传递方法、装置、介质及终端 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009012132A (ja) * | 2007-07-05 | 2009-01-22 | Denso Wave Inc | 多関節型ロボットおよびワーク受け渡し方法 |
JP2013111737A (ja) | 2011-12-01 | 2013-06-10 | Sony Corp | ロボット装置及びその制御方法、並びにコンピューター・プログラム |
JP2013184273A (ja) * | 2012-03-09 | 2013-09-19 | Sony Corp | ロボット装置及びロボット装置の制御方法、並びにコンピューター・プログラム |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7443115B2 (en) * | 2002-10-29 | 2008-10-28 | Matsushita Electric Industrial Co., Ltd. | Apparatus and method for robot handling control |
CN1984756B (zh) * | 2004-07-13 | 2011-12-07 | 松下电器产业株式会社 | 物品保持系统、机器人以及机器人控制方法 |
JP4718987B2 (ja) * | 2005-12-12 | 2011-07-06 | 本田技研工業株式会社 | インターフェース装置およびそれを備えた移動ロボット |
JP4456561B2 (ja) * | 2005-12-12 | 2010-04-28 | 本田技研工業株式会社 | 自律移動ロボット |
JP5013270B2 (ja) * | 2006-02-02 | 2012-08-29 | 株式会社安川電機 | ロボットシステム |
JP2008188722A (ja) * | 2007-02-06 | 2008-08-21 | Fanuc Ltd | ロボット制御装置 |
KR101358477B1 (ko) * | 2008-04-02 | 2014-02-05 | 아이로보트 코퍼레이션 | 로보틱스 시스템 |
KR101687626B1 (ko) * | 2010-01-06 | 2016-12-21 | 삼성전자주식회사 | 로봇 및 그 제어방법 |
US10518409B2 (en) * | 2014-09-02 | 2019-12-31 | Mark Oleynik | Robotic manipulation methods and systems for executing a domain-specific application in an instrumented environment with electronic minimanipulation libraries |
CN205058045U (zh) * | 2015-10-26 | 2016-03-02 | 众德迪克科技(北京)有限公司 | 一种带有视觉伺服系统的机器人 |
KR101820241B1 (ko) * | 2016-02-29 | 2018-01-18 | 울산대학교 산학협력단 | 그리퍼를 이용한 물체의 움직임 추정 장치 및 그 방법 |
JP6726388B2 (ja) * | 2016-03-16 | 2020-07-22 | 富士ゼロックス株式会社 | ロボット制御システム |
US20180021949A1 (en) * | 2016-07-20 | 2018-01-25 | Canon Kabushiki Kaisha | Robot apparatus, robot controlling method, program, and recording medium |
US10682774B2 (en) * | 2017-12-12 | 2020-06-16 | X Development Llc | Sensorized robotic gripping device |
CN108248845A (zh) | 2018-01-31 | 2018-07-06 | 湖南大学 | 一种基于动态重心补偿的旋翼飞行机械臂系统及算法 |
CN108297068A (zh) * | 2018-04-11 | 2018-07-20 | 南京理工大学 | 一种基于力反馈主从控制的带电作业机器人专用工具更换方法 |
US10967507B2 (en) * | 2018-05-02 | 2021-04-06 | X Development Llc | Positioning a robot sensor for object classification |
US10471591B1 (en) * | 2018-06-01 | 2019-11-12 | X Development Llc | Object hand-over between robot and actor |
JP6916157B2 (ja) * | 2018-10-23 | 2021-08-11 | ファナック株式会社 | 人と協働作業を行うロボットシステム、及びロボット制御方法 |
-
2019
- 2019-09-12 US US17/279,990 patent/US12128560B2/en active Active
- 2019-09-12 JP JP2020550247A patent/JP7318655B2/ja active Active
- 2019-09-12 KR KR1020217007107A patent/KR102716734B1/ko active IP Right Grant
- 2019-09-12 EP EP19869651.0A patent/EP3862148A4/en active Pending
- 2019-09-12 WO PCT/JP2019/035825 patent/WO2020071080A1/ja active Application Filing
- 2019-09-12 CN CN201980064044.1A patent/CN112770876A/zh active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009012132A (ja) * | 2007-07-05 | 2009-01-22 | Denso Wave Inc | 多関節型ロボットおよびワーク受け渡し方法 |
JP2013111737A (ja) | 2011-12-01 | 2013-06-10 | Sony Corp | ロボット装置及びその制御方法、並びにコンピューター・プログラム |
JP2013184273A (ja) * | 2012-03-09 | 2013-09-19 | Sony Corp | ロボット装置及びロボット装置の制御方法、並びにコンピューター・プログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3862148A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4245483A4 (en) * | 2020-11-10 | 2024-04-24 | Sony Group Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM |
EP4082728A3 (de) * | 2021-04-09 | 2023-02-22 | STILL GmbH | Mobiler kommissionierroboter |
WO2024053204A1 (ja) * | 2022-09-09 | 2024-03-14 | 東京ロボティクス株式会社 | モバイルマニピュレータ及びその制御方法、プログラム |
Also Published As
Publication number | Publication date |
---|---|
US12128560B2 (en) | 2024-10-29 |
KR20210069041A (ko) | 2021-06-10 |
KR102716734B1 (ko) | 2024-10-15 |
JP7318655B2 (ja) | 2023-08-01 |
EP3862148A4 (en) | 2021-12-22 |
EP3862148A1 (en) | 2021-08-11 |
JPWO2020071080A1 (ja) | 2021-09-02 |
US20210394362A1 (en) | 2021-12-23 |
CN112770876A (zh) | 2021-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020071080A1 (ja) | 情報処理装置、制御方法及びプログラム | |
Asfour et al. | Armar-6: A collaborative humanoid robot for industrial environments | |
US11072068B2 (en) | Robot apparatus and method of controlling robot apparatus | |
Yuan et al. | Design and control of roller grasper v2 for in-hand manipulation | |
US8483877B2 (en) | Workspace safe operation of a force- or impedance-controlled robot | |
Nemlekar et al. | Object transfer point estimation for fluent human-robot handovers | |
Bohez et al. | Sensor fusion for robot control through deep reinforcement learning | |
Felip et al. | Manipulation primitives: A paradigm for abstraction and execution of grasping and manipulation tasks | |
WO2022039058A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
Devine et al. | Real time robotic arm control using hand gestures with multiple end effectors | |
JP2003266349A (ja) | 位置認識方法、その装置、そのプログラム、その記録媒体及び位置認識装置搭載型ロボット装置 | |
JP2007222951A (ja) | ロボット装置 | |
Park et al. | A whole-body integrated AVATAR system: Implementation of telepresence with intuitive control and immersive feedback | |
Falck et al. | DE VITO: A dual-arm, high degree-of-freedom, lightweight, inexpensive, passive upper-limb exoskeleton for robot teleoperation | |
Chen et al. | Human-aided robotic grasping | |
Jin et al. | Minimal grasper: A practical robotic grasper with robust performance for pick-and-place tasks | |
US20220355490A1 (en) | Control device, control method, and program | |
SaLoutos et al. | Towards robust autonomous grasping with reflexes using high-bandwidth sensing and actuation | |
Musić et al. | Robot team teleoperation for cooperative manipulation using wearable haptics | |
JP2005088175A (ja) | ロボット装置及びロボット装置の動作制御方法 | |
WO2022102403A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
KR20120048107A (ko) | 로봇의 양 팔을 이용해 물체를 잡기 위한 로봇의 동작 제어 시스템 및 제어 방법 | |
Haschke | Grasping and manipulation of unknown objects based on visual and tactile feedback | |
CN110877335A (zh) | 一种基于混合滤波器自适应无标记机械臂轨迹跟踪方法 | |
Twardon et al. | Exploiting eye-hand coordination: A novel approach to remote manipulation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19869651 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020550247 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019869651 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019869651 Country of ref document: EP Effective date: 20210506 |