CN108115671B - Double-arm robot control method and system based on 3D vision sensor - Google Patents
- Publication number
- CN108115671B (application CN201611060835.1A)
- Authority
- CN
- China
- Prior art keywords
- arm
- skeleton
- image
- vision sensor
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/02—Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
- B25J9/04—Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
- B25J9/046—Revolute coordinate type
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1638—Programme controls characterised by the control loop compensation for arm bending/inertia, pay load weight/inertia
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1671—Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1682—Dual arm manipulator; Coordination of several manipulators
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention relates to the technical field of robot control, and particularly discloses a double-arm robot control method based on a 3D vision sensor, which comprises the following steps: S1, obtaining a depth image of the arm skeleton points through a 3D vision sensor; S2, analyzing the depth image to obtain the position information of the arm skeleton points; S3, acquiring the positional relationship of the skeleton points between consecutive frames of the depth image according to the position information of the arm skeleton points, so as to calculate the rotation angle of the arm skeleton; and S4, sending the rotation angle of the arm skeleton to a robot controller, which controls the robot to execute the corresponding action. The invention effectively overcomes the various problems of existing control modes such as a mouse, a keyboard, a teaching box or wearable devices.
Description
Technical Field
The invention relates to the technical field of robot control, in particular to a method and a system for controlling a double-arm robot based on a 3D vision sensor.
Background
Most existing double-arm robots are controlled through external equipment such as a mouse, a keyboard, a joystick or a teaching box, but this approach is cumbersome to operate and offers poor human-computer interaction. Wearable control devices, meanwhile, are expensive. Freeing robot control from its dependence on such external devices is therefore an urgent problem.
Disclosure of Invention
The invention aims to overcome the complex operation and poor interactivity of existing double-arm robots controlled by external equipment, and provides a double-arm robot control method based on a 3D vision sensor.
In order to achieve the purpose, the invention adopts the following technical scheme:
in one aspect, the invention provides a method for controlling a dual-arm robot based on a 3D vision sensor, which specifically comprises the following steps:
s1, acquiring an image of an arm skeleton point through a 3D vision sensor;
s2, analyzing the image to obtain the position information of the arm skeleton point;
s3, acquiring the position relation of skeleton points of the front frame and the back frame in the image according to the position information of the skeleton points of the arm in the image so as to calculate the rotation angle of the skeleton of the arm;
and S4, sending the rotation angle of the arm skeleton to a robot controller, and controlling the robot to execute the action corresponding to the arm by the controller.
In some embodiments, the 3D vision sensor includes an RGB camera, an infrared emitting device, and a camera for receiving infrared rays.
In some embodiments, the images acquired by the 3D vision sensor include depth images and color images.
In some embodiments, step S2 further includes the steps of:
and extracting the main body shape of the human body from the image by background subtraction, positioning each skeleton point of the human body by utilizing shape information, and obtaining the three-dimensional coordinates of the skeleton points so as to obtain the position information of the arm skeleton points.
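As a rough illustration of this background-subtraction step, the sketch below keeps the pixels of a depth image that differ from a pre-captured empty-scene frame; the threshold value and array layout are assumptions, since the patent does not specify them:

```python
import numpy as np

def extract_body_mask(depth, background, tol=50):
    """Background subtraction on a depth image: pixels whose depth differs
    from a pre-captured empty-scene frame by more than `tol` depth units
    are kept as the human-body silhouette. `tol` is an assumed value."""
    depth = np.asarray(depth, dtype=np.int32)
    background = np.asarray(background, dtype=np.int32)
    return np.abs(depth - background) > tol
```

Skeleton points would then be located within the resulting silhouette using shape information, as the text describes.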
In some embodiments, the step S3 includes the steps of:
calculating the rotation angles of the two degrees of freedom at the elbow skeleton point according to the positional relationship of the elbow skeleton points in the previous and subsequent frames; and calculating the rotation angles of the two degrees of freedom at the shoulder skeleton point according to the relative position information of the shoulder skeleton points in the previous and subsequent frames.
In addition, the invention also discloses a double-arm robot control system based on the 3D vision sensor, which comprises the following modules:
the 3D vision sensor is used for acquiring an image of an arm skeleton point;
the image analysis module is used for analyzing the image to obtain the position information of the arm skeleton points, and acquiring the position relation of the skeleton points of the front frame and the back frame in the image according to the position information of the arm skeleton points in the image to calculate the rotation angle of the arm skeleton;
and the robot controller is used for receiving the rotation angle information of the arm skeleton and executing the action corresponding to the arm.
In some embodiments, the image analysis module is configured to extract a main body shape of a person from the image by background subtraction, position each bone point of the human body by using shape information, and obtain a three-dimensional coordinate of the bone point, thereby obtaining position information of the arm bone point.
In some embodiments, the image analysis module is further configured to calculate rotation angles of the elbow skeleton points in two degrees of freedom according to the position relationship of the elbow skeleton points of the previous and subsequent frames, and calculate rotation angles of the shoulder skeleton points in two degrees of freedom according to the relative position information of the shoulder skeleton points of the previous and subsequent frames.
In addition, the invention also discloses a double-arm robot based on a 3D vision sensor, comprising the above double-arm robot control system based on the 3D vision sensor.
The invention has the beneficial effects that:
the method comprises the steps of obtaining a depth image through a 3D vision sensor, and extracting three-dimensional data points of all skeletal points of two arms of a human body through the depth image; and tracking each skeleton point, and calculating the rotation angle of each axis of the robot according to the position of each frame of skeleton point, so that the robot is controlled, and the constraint of the traditional robot control method through external equipment is eliminated.
More particularly, this vision-based robot control method can control the robot simply, effectively and in real time, offers strong human-computer interaction, and is suitable for use in high-risk or toxic environments.
Drawings
FIG. 1 is a flow chart of a control method of a two-arm robot based on a 3D vision sensor according to the present invention;
FIG. 2 is a diagram illustrating an embodiment of a 3D vision sensor in a method for controlling a dual-arm robot based on a 3D vision sensor according to the present invention;
FIG. 3 is a diagram of two-arm skeleton points in a two-arm robot control method based on a 3D vision sensor according to the present invention;
fig. 4 is a diagram of each coordinate system of left arm skeleton points in an embodiment of a two-arm robot control method based on a 3D vision sensor.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
The idea of the invention is as follows: a depth image is acquired with the 3D vision sensor; the acquired depth image is analyzed by the robot vision analyzer to obtain the position information of the skeleton points of the human arms; the frame-to-frame motion of the skeleton points in the depth image is then calculated and sent to the robot controller, where it is converted into corresponding actions, thereby controlling the robot.
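The per-frame flow just described can be sketched as a small pipeline step; every name here (the analyzer, angle, and controller callables) is a placeholder assumption, not an API from the patent:

```python
def run_once(depth_frame, analyze, angles_from_points, send):
    """One cycle of the described pipeline: analyze a depth frame into arm
    skeleton positions, convert them to per-axis rotation angles, and hand
    the angles to the robot controller. All callables are injected
    placeholders so the cycle itself stays trivial."""
    points = analyze(depth_frame)        # skeleton point positions, or None
    if points is None:                   # no person detected in this frame
        return None
    angles = angles_from_points(points)  # per-axis rotation angles
    send(angles)                         # robot controller executes the motion
    return angles
```

In a real system this function would be called once per captured frame, up to the sensor's frame rate.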
Fig. 1 is a flowchart illustrating a method for controlling a dual-arm robot based on a 3D vision sensor according to the present invention. In this embodiment, the method for controlling the two-arm robot based on the 3D vision sensor is specifically realized by the following steps:
step S1 is executed to acquire an image of the arm bone points through the 3D vision sensor. The acquired images include depth images and color images.
As shown in fig. 2, in one embodiment of the present invention, the 3D vision sensor is composed of an RGB camera, an infrared ray emitting device and a camera for receiving infrared rays. Wherein, the RGB video camera is used for obtaining the color image, and the infrared ray emitter and the infrared ray receiving camera are used for obtaining the depth image. The resolution of the acquired color image and depth image is 640 × 480, and at most 30 frames of images can be taken per second.
And step S2 is executed to analyze the depth image obtained by the 3D vision sensor to obtain the position information of the arm bone point.
As shown in fig. 3, in the invention the two arms of the human body are represented by seven skeleton nodes: the left hand, left elbow, left shoulder, shoulder center, right shoulder, right elbow and right hand. The shoulder center is the skeleton base point and is also the parent skeleton point of the left shoulder; by analogy, the left shoulder is the parent of the left elbow, the left elbow is the parent of the left hand, and the right arm follows the same pattern. In this step, the body shape of the person is first extracted from the depth image by background subtraction, each skeleton point of the human body is then located using shape information, and the three-dimensional coordinates of the skeleton points are obtained, thereby giving the position information of the arm skeleton points. Specifically, taking the left arm as an example, the left-arm base coordinate system shown in fig. 4 is established with the left shoulder skeleton point as its origin, so the left shoulder skeleton point has coordinates (0, 0, 0). The coordinates of all left-arm skeleton points relative to this left-shoulder base coordinate system are then obtained from the 3D sensor.
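The translation into the left-shoulder base coordinate system described above can be sketched as follows; the joint names and dictionary layout are illustrative assumptions, not from the patent:

```python
import numpy as np

def to_shoulder_frame(points):
    """Re-express arm skeleton points relative to the left-shoulder origin.

    `points` maps joint names to 3D sensor coordinates. The base frame
    simply translates every point so that the left shoulder becomes
    (0, 0, 0), as in the coordinate system of fig. 4."""
    origin = np.asarray(points["left_shoulder"], dtype=float)
    return {name: np.asarray(p, dtype=float) - origin
            for name, p in points.items()}
```

The same translation, applied per frame, yields the (x1, y1, z1) elbow and (x2, y2, z2) hand coordinates used in the angle calculations that follow.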
And step S3 is executed, the position relation of the skeleton points of the front frame and the back frame in the depth image is obtained according to the position information of the skeleton points of the arm in the depth image, and the rotation angle of the skeleton of the arm is calculated.
Because solving each joint's rotation angle by inverse kinematics can yield multiple solutions, and the resulting joint motion path does not necessarily match the actual arm motion, the rotation of each skeleton node is instead calculated directly and each robot axis is made to rotate by the corresponding angle, thereby controlling the robot. The rotation angles of the two degrees of freedom at the elbow skeleton point are calculated from the positional relationship of the elbow skeleton points in the previous and subsequent frames; the rotation angles of the two degrees of freedom at the shoulder skeleton point are calculated from the relative position information of the shoulder skeleton points in the previous and subsequent frames.
Specifically, referring to fig. 4 and taking the calculation for the left-arm skeleton points as an example, the left shoulder skeleton point is connected to the left elbow skeleton point to form the upper arm of the left arm, denoted l1. Analysis of the human upper arm shows that the left shoulder skeleton point has two degrees of freedom: rotation of l1 about the X-axis in the YOZ plane (corresponding to robot axis 1) and rotation about the Y-axis in the XOZ plane (corresponding to robot axis 2). From the three-dimensional coordinates (x1, y1, z1) of the left elbow skeleton point in the left-shoulder base coordinate system in each acquired frame, the rotation angles of the upper arm l1 in the YOZ plane and in the XOZ plane can each be calculated. The upper arm l1 is defined to rotate only within the positive-Y quadrant of the YOZ plane, i.e. a rotation range of ±90°, and only within the positive-X quadrant of the XOZ plane, likewise a rotation range of ±90°.

The left elbow skeleton point is connected to the left hand skeleton point to form the lower arm of the left arm, denoted l2. The left elbow skeleton point likewise has two degrees of freedom: rotation of l2 about the Y1-axis in the Y1O1Z1 plane (corresponding to robot axis 3) and rotation about the Z1-axis in the X1O1Y1 plane (corresponding to robot axis 4). From the three-dimensional coordinates (x2, y2, z2) of the hand skeleton point in the shoulder base coordinate system in each frame, the rotation angles of the lower arm l2 in the Y1O1Z1 plane and in the X1O1Y1 plane can be calculated. The lower arm l2 is defined to rotate only within the positive-Y1 quadrant of the Y1O1Z1 plane, i.e. a rotation range of ±90°, and only within the positive-X1 quadrant of the X1O1Y1 plane, where, in accordance with the actual range of motion of the human lower arm, the rotation range is -45° to 90°.

Since the hand and elbow skeleton points can never coincide, x2 ≠ x1. The rotation angles of the left-arm skeleton points are thus obtained. The rotation angles of the right-arm skeleton are calculated in the same way as those of the left arm and are not repeated here.
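As a rough illustration of this per-frame angle computation, the sketch below projects the elbow and hand coordinates onto the planes named above and clamps the results to the stated ranges. The patent presents its exact formulas only as figures, so the arctangent expressions and sign conventions here are assumptions:

```python
import math

def clamp(angle, lo=-90.0, hi=90.0):
    """Limit an angle (degrees) to the rotation range stated in the text."""
    return max(lo, min(hi, angle))

def left_arm_axes(elbow, hand):
    """Rotation angles for the four left-arm robot axes from one frame's
    skeleton coordinates in the left-shoulder base frame. The arctangent
    projections are one plausible reading of the figures, not the patent's
    verbatim formulas."""
    x1, y1, z1 = elbow   # left elbow in the shoulder base frame
    x2, y2, z2 = hand    # left hand in the shoulder base frame
    axis1 = clamp(math.degrees(math.atan2(y1, z1)))  # upper arm, YOZ plane
    axis2 = clamp(math.degrees(math.atan2(x1, z1)))  # upper arm, XOZ plane
    axis3 = clamp(math.degrees(math.atan2(z2 - z1, y2 - y1)))  # lower arm, Y1O1Z1
    axis4 = clamp(math.degrees(math.atan2(y2 - y1, x2 - x1)),
                  -45.0, 90.0)                       # lower arm, X1O1Y1, -45°..90°
    return axis1, axis2, axis3, axis4
```

The four returned values would be sent to the robot controller as the commands for axes 1 through 4, with the right arm handled symmetrically.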
Step S4 is executed to transmit the rotation angle of the arm skeleton to the robot controller, and the controller controls the robot to execute the corresponding operation.
Correspondingly, the invention also discloses a double-arm robot control system based on the 3D vision sensor, which comprises the following modules:
the 3D vision sensor is used for acquiring an image of an arm skeleton point;
and the image analysis module is used for analyzing the image to obtain the position information of the arm skeleton points, and acquiring the position relation of the skeleton points of the front frame and the back frame in the image according to the position information of the arm skeleton points in the image to calculate the rotation angle of the arm skeleton. Specifically, the image analysis module is used for extracting the main body shape of the person from the image through background subtraction, positioning each skeleton point of the human body by using shape information, and obtaining the three-dimensional coordinates of the skeleton points, so as to obtain the position information of the arm skeleton points. The image analysis module is further used for calculating the rotation angles of the elbow skeleton points with two degrees of freedom according to the position relation of the elbow skeleton points of the previous and next frames, and calculating the rotation angles of the shoulder skeleton points with two degrees of freedom according to the relative position information of the shoulder skeleton points of the previous and next frames.
And the robot controller is used for receiving the rotation angle information of the arm skeleton and executing the action corresponding to the arm.
Correspondingly, the invention also discloses a double-arm robot based on the 3D vision sensor, comprising the above-described double-arm robot control system based on the 3D vision sensor.
The invention provides a double-arm robot control method based on a 3D vision sensor, which uses the 3D vision sensor to acquire the skeleton points of a person's arms and controls the double-arm robot through the positional changes of those skeleton points, thereby effectively overcoming the various problems of existing control modes such as a mouse, a keyboard, a teaching box or wearable devices.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (6)
1. A control method of a double-arm robot based on a 3D vision sensor is characterized by comprising the following steps:
s1, acquiring an image of an arm skeleton point through a 3D vision sensor;
s2, analyzing the image to obtain the position information of the arm skeleton point;
s3, acquiring the position relation of skeleton points of the front frame and the back frame in the image according to the position information of the skeleton points of the arm in the image so as to calculate the rotation angle of the skeleton of the arm;
s4, sending the rotation angle of the arm skeleton to a robot controller, wherein the controller controls the robot to execute the action corresponding to the arm;
the 3D vision sensor comprises an RGB camera, an infrared emission device and a camera for receiving infrared rays;
step S2 further includes the steps of:
and extracting the main body shape of the human body from the image by background subtraction, positioning each skeleton point of the human body by utilizing shape information, and obtaining the three-dimensional coordinates of the skeleton points so as to obtain the position information of the arm skeleton points.
2. The method for controlling a 3D vision sensor-based two-arm robot as claimed in claim 1, wherein the image acquired by the 3D vision sensor includes a depth image and a color image.
3. The 3D vision sensor-based dual-arm robot control method of claim 1, wherein the step S3 comprises the steps of:
calculating the rotation angles of the elbow skeleton points with two degrees of freedom according to the position relation of the elbow skeleton points of the previous and next frames;
and calculating the rotation angles of the shoulder skeleton points in two degrees of freedom according to the relative position information of the shoulder skeleton points of the front and rear frames.
4. A two-arm robot control system based on 3D vision sensor is characterized by comprising the following modules:
the 3D vision sensor is used for acquiring an image of an arm skeleton point;
the image analysis module is used for analyzing the image to obtain the position information of the arm skeleton points, and acquiring the position relation of the skeleton points of the front frame and the back frame in the image according to the position information of the arm skeleton points in the image to calculate the rotation angle of the arm skeleton;
the robot controller is used for receiving the rotation angle information of the arm skeleton and executing the action corresponding to the arm;
the image analysis module is used for extracting the main body shape of a person from the image through background subtraction, positioning each skeleton point of the human body by utilizing shape information, and obtaining the three-dimensional coordinates of the skeleton points so as to obtain the position information of the arm skeleton points.
5. The 3D vision sensor-based dual-arm robot control system as claimed in claim 4, wherein the image analysis module is further configured to calculate a rotation angle of two degrees of freedom of an elbow skeleton point based on the positional relationship of the elbow skeleton points of the previous and subsequent frames, and to calculate a rotation angle of two degrees of freedom of a shoulder skeleton point based on the relative positional information of the shoulder skeleton points of the previous and subsequent frames.
6. A two-arm robot based on a 3D vision sensor, characterized by comprising the two-arm robot control system of the 3D vision sensor according to any one of claims 4 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611060835.1A CN108115671B (en) | 2016-11-26 | 2016-11-26 | Double-arm robot control method and system based on 3D vision sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611060835.1A CN108115671B (en) | 2016-11-26 | 2016-11-26 | Double-arm robot control method and system based on 3D vision sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108115671A CN108115671A (en) | 2018-06-05 |
CN108115671B true CN108115671B (en) | 2021-04-20 |
Family
ID=62224540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611060835.1A Active CN108115671B (en) | 2016-11-26 | 2016-11-26 | Double-arm robot control method and system based on 3D vision sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108115671B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109648570A (en) * | 2018-09-12 | 2019-04-19 | 河南工程学院 | Robotic presentation learning method based on HTCVIVE wearable device |
CN110308747B (en) * | 2019-06-26 | 2022-05-31 | 西南民族大学 | Electronic type full-automatic computer operating device based on machine vision |
CN111002289B (en) * | 2019-11-25 | 2021-08-17 | 华中科技大学 | Robot online teaching method and device, terminal device and storage medium |
CN114474050A (en) * | 2021-12-29 | 2022-05-13 | 北京精密机电控制设备研究所 | Grabbing prediction-based workpiece sorting method of double-arm robot with multiple topological structures |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5423415B2 (en) * | 2010-01-19 | 2014-02-19 | 株式会社安川電機 | Production system |
KR101410245B1 (en) * | 2013-03-22 | 2014-06-20 | 주식회사 로보스타 | Industrial 16-axis robot with dual arms |
CN103170973B (en) * | 2013-03-28 | 2015-03-11 | 上海理工大学 | Man-machine cooperation device and method based on Kinect video camera |
JP6255901B2 (en) * | 2013-10-30 | 2018-01-10 | セイコーエプソン株式会社 | Robot control device, robot and robot system |
WO2015125297A1 (en) * | 2014-02-24 | 2015-08-27 | 日産自動車株式会社 | Local location computation device and local location computation method |
JP6476632B2 (en) * | 2014-07-31 | 2019-03-06 | セイコーエプソン株式会社 | Double arm robot |
CN104385282B (en) * | 2014-08-29 | 2017-04-12 | 暨南大学 | Visual intelligent numerical control system and visual measuring method thereof |
CN104626206B (en) * | 2014-12-17 | 2016-04-20 | 西南科技大学 | The posture information measuring method of robot manipulating task under a kind of non-structure environment |
CN105034001B (en) * | 2015-08-03 | 2017-04-19 | 南京邮电大学 | Biomimetic manipulator synchronous wireless control system and method |
CN205386823U (en) * | 2016-02-05 | 2016-07-20 | 中国科学院自动化研究所 | General modularization both arms service robot platform and system |
- 2016-11-26: CN application CN201611060835.1A granted as CN108115671B (active)
Also Published As
Publication number | Publication date |
---|---|
CN108115671A (en) | 2018-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111694429B (en) | Virtual object driving method and device, electronic equipment and readable storage | |
Ribo et al. | A new optical tracking system for virtual and augmented reality applications | |
US10825197B2 (en) | Three dimensional position estimation mechanism | |
Zuo et al. | Devo: Depth-event camera visual odometry in challenging conditions | |
CN108115671B (en) | Double-arm robot control method and system based on 3D vision sensor | |
CN107168515A (en) | The localization method and device of handle in a kind of VR all-in-ones | |
CN113505694B (en) | Man-machine interaction method and device based on sight tracking and computer equipment | |
Kim et al. | Real-time rotational motion estimation with contrast maximization over globally aligned events | |
CN108828996A (en) | A kind of the mechanical arm remote control system and method for view-based access control model information | |
US20210200311A1 (en) | Proxy controller suit with optional dual range kinematics | |
CN104656893A (en) | Remote interaction control system and method for physical information space | |
CN113103230A (en) | Human-computer interaction system and method based on remote operation of treatment robot | |
Noh et al. | An HMD-based Mixed Reality System for Avatar-Mediated Remote Collaboration with Bare-hand Interaction. | |
CN113379839A (en) | Ground visual angle monocular vision odometer method based on event camera system | |
CN113386128A (en) | Body potential interaction method for multi-degree-of-freedom robot | |
CN111947650A (en) | Fusion positioning system and method based on optical tracking and inertial tracking | |
TW201310339A (en) | System and method for controlling a robot | |
CN113496168B (en) | Sign language data acquisition method, device and storage medium | |
Otto et al. | Towards ubiquitous tracking: Presenting a scalable, markerless tracking approach using multiple depth cameras | |
Cha et al. | Mobile. Egocentric human body motion reconstruction using only eyeglasses-mounted cameras and a few body-worn inertial sensors | |
Xin et al. | 3D augmented reality teleoperated robot system based on dual vision | |
Becher et al. | VIRTOOAIR: virtual reality toolbox for avatar intelligent reconstruction | |
Zhou et al. | Development of a synchronized human-robot-virtuality interaction system using cooperative robot and motion capture device | |
You et al. | Research and Implementation of Human-Computer Interaction System Based on Human Body Attitude Recognition Algorithm | |
Huang et al. | Design and application of intelligent patrol system based on virtual reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||