Disclosure of Invention
The embodiments of the application provide a visual SLAM method, a device, a robot and a storage medium, which can solve the problem of low accuracy of visual SLAM mapping and positioning in weakly textured environments, for example a low matching success rate or poor precision caused by an insufficient number of feature points in the surrounding environment.
In a first aspect, an embodiment of the present application provides a visual SLAM method, including:
acquiring a current image, and determining a characteristic line of a preset type target object from the current image;
acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and when the matching is successful, determining the current pose according to the characteristic line of the preset type target object and the historical characteristic line, and constructing a map according to the current pose.
Optionally, the determining a feature line of a preset type of target object from the current image includes:
extracting each line segment in the image from the current image;
determining a target object with at least two line segments intersected at one point according to the connection condition of each line segment;
and when the target object conforms to a preset type, taking a line segment corresponding to the preset type target object as a characteristic line of the preset type target object.
Optionally, the matching processing of the feature line of the preset type target object and the historical feature line includes:
determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object;
and when the first error value is smaller than a preset first threshold value, the matching is successful.
Optionally, the matching the characteristic line of the preset type target object with the historical characteristic line further includes:
determining the characteristic points of the preset type target object according to the characteristic lines of the preset type target object;
determining historical characteristic points of the preset type target object according to the historical characteristic lines;
calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object;
and when the second error value is smaller than a preset second threshold value, the matching is successful.
Optionally, the determining the feature point of the preset type object according to the feature line of the preset type object includes:
determining an intersection point of the feature lines of the preset type target object;
and taking the intersection point as a characteristic point of the preset type target object.
Optionally, the determining the current pose according to the feature line of the preset type target object and the historical feature line includes:
acquiring a sampling point set of the feature line of the preset type target object and the historical feature line at the same position;
calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
Optionally, the determining a current pose according to the feature line of the preset type target object and the historical feature line further includes:
acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line;
calculating the at least three pairs of sampling points according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image;
and determining the current pose according to the rotation matrix and the translation matrix.
In a second aspect, an embodiment of the present application provides a visual SLAM device, including:
the acquisition module is used for acquiring a current image and determining a characteristic line of a preset type target object from the current image;
the matching module is used for acquiring adjacent images and determining a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and the construction module is used for determining the current pose according to the characteristic line after the matching is successful, and constructing a map according to the current pose.
In a third aspect, embodiments of the present application provide a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above-mentioned visual SLAM methods when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the above-mentioned visual SLAM methods.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a robot, causes the robot to perform any one of the above-mentioned visual SLAM methods of the first aspect.
The method comprises the steps of acquiring a current image and determining a feature line of a preset type target object from the current image, so that a feature line for comparison is determined in the current environment of the robot; acquiring adjacent images and determining a historical feature line of the preset type target object from the adjacent images; and matching the feature line of the preset type target object with the historical feature line, so as to judge whether the feature line currently acquired by the robot is the same as the historical feature line of the preset type target object. After the matching is successful, the current pose is determined according to the feature line of the preset type target object and the historical feature line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Fig. 1 is a schematic flow chart of a visual SLAM method in an embodiment of the present application, where an execution subject of the method may be a robot, and as shown in fig. 1, the visual SLAM method may include the following steps:
step S101, obtaining a current image, and determining a characteristic line of a preset type target object from the current image.
In this embodiment, if visual SLAM only considers feature point information in the natural environment, the accuracy of its mapping and positioning will be low. Therefore, a current image can be acquired by a camera mounted on the robot, and the robot determines a feature line of a preset type target object from the current image, so that the robot compares the features of this type of target object in the current environment and accurately determines its current pose. As shown in fig. 2, if the current target object is a corner, the three line segments forming the corner are the feature lines of the corner, and the intersection point between the feature lines may be taken as a feature point of the corner. The preset type of target object may be a right-angle type of target object, that is, a target object with a right-angle feature.
Optionally, the camera may be installed at the front section of the robot with a certain inclination angle. At least one camera may be installed on the robot, and the camera includes at least one of a monocular camera and a binocular camera for sensing the surrounding environment. Although many kinds of sensors can sense the surrounding environment, the camera is widely studied in this field because of its low price, small size and ease of installation.
Optionally, step S101 includes:
and extracting each line segment in the image from the current image.
And determining a target object of which at least two line segments intersect at one point according to the connection condition of each line segment.
And when the target object conforms to a preset type, taking a line segment corresponding to the preset type target object as a characteristic line of the preset type target object.
In this embodiment, the robot extracts each line segment from the acquired image of the current environment, determines a target object in which at least two line segments intersect at one point according to the connection status of the line segments, and determines the type of the target object. When the type of the target object meets the preset type, the at least two line segments of the target object obtained in this way are taken as the feature lines in this embodiment.
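The embodiment does not prescribe a particular line-segment detector, so the following is only a minimal sketch of this step assuming OpenCV is available; the Canny/Hough parameters and the endpoint tolerance are illustrative values, not values taken from the embodiment.

import cv2
import numpy as np

def extract_segments(image_bgr):
    # Extract candidate line segments from the current image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
    return [] if segments is None else [tuple(s[0]) for s in segments]  # (x1, y1, x2, y2)

def candidate_targets(segments, tol=3.0):
    # Pair up segments whose endpoints (approximately) coincide; each pair is a
    # candidate target object whose segments are taken as its feature lines and
    # whose common point is a candidate feature point.
    targets = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            for pi in (segments[i][:2], segments[i][2:]):
                for pj in (segments[j][:2], segments[j][2:]):
                    if np.linalg.norm(np.subtract(pi, pj)) < tol:
                        targets.append((segments[i], segments[j], pi))
    return targets

Whether a candidate actually conforms to the preset type (for example, the right-angle type mentioned above) would then be checked separately, e.g. from the angle between the paired segments.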
S102, acquiring adjacent images, and determining a historical characteristic line of the preset type target object from the adjacent images; and matching the characteristic line of the preset type target object with the historical characteristic line.
In this embodiment, after the robot acquires the current image, it further acquires an adjacent image from an adjacent time, that is, the previous image. In order to compare the two images, a historical feature line of the preset type target object is determined from the adjacent image, and the historical feature line is matched with the feature line of the preset type target object obtained from the current image. In this way it is determined whether the target object in the current image can be matched with the target object in the adjacent image, so that the robot can accurately determine the current pose according to the target object in the two images.
Optionally, after the adjacent images are acquired, tracking processing may be performed according to the feature value of the feature line of the preset type target object determined from the current image, so as to determine the historical feature line in the adjacent images.
Optionally, as shown in fig. 3, step S102 includes:
and S301, determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
Step S302, calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
Step S303, when the first error value is smaller than a preset first threshold, the matching is successful.
In this embodiment, the robot may select a preset number of sampling points from the historical feature line of the preset type target object in the adjacent image, determine the position coordinates of these sampling points, and calculate the mean of the sampling points according to the position coordinates to obtain the characteristic value of the historical feature line. A preset number of sampling points is likewise selected from the feature line of the preset type target object in the current image, at positions corresponding to the sampling points selected on the historical feature line; their position coordinates are determined and the mean of the sampling points is calculated to obtain the characteristic value of the feature line of the preset type target object. A first error value is then obtained by calculating the difference between the two characteristic values, and the first error value is compared with a first threshold preset in the robot; when the first error value is smaller than the first threshold, the two feature lines are successfully matched. The first threshold may be set according to the angular resolution of the camera, the frame rate and the motion speed of the robot, but is not limited thereto; for example, for a camera with a 30 frame rate, 1080p resolution and a 120 degree field angle on a robot moving at 30 cm/s, the first threshold may be set to 3.6 pixels.
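A minimal numpy sketch of this comparison follows; the embodiment only requires a mean over sampled coordinates, so the use of the Euclidean distance as the error measure and the default 3.6-pixel threshold (taken from the example above) are assumptions.

import numpy as np

def feature_value(sample_points):
    # Characteristic value of a feature line: mean of the sampled position coordinates.
    return np.mean(np.asarray(sample_points, dtype=np.float64), axis=0)

def lines_match(current_samples, history_samples, first_threshold=3.6):
    # First error value = distance between the two characteristic values;
    # matching succeeds when it is below the preset first threshold.
    err = np.linalg.norm(feature_value(current_samples) - feature_value(history_samples))
    return err < first_threshold, err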
Optionally, as shown in fig. 4, step S102 further includes:
step S401, determining the characteristic points of the preset type target object according to the characteristic lines of the preset type target object.
Optionally, step S401 includes:
determining an intersection point of the feature lines of the preset type target object; and taking the intersection point as a characteristic point of the preset type target object.
Optionally, a point at a preset position around the feature line may also be used as a feature point of the preset type of target object.
And S402, determining the historical characteristic points of the preset type target object according to the historical characteristic lines.
And S403, calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object.
And S404, when the second error value is smaller than a preset second threshold value, the matching is successful.
In this embodiment, the robot may determine the historical feature points of the preset type target object from the historical feature line, and the feature points of the preset type target object from the feature line of the preset type target object in the current image, using the feature point selection described above. A second error value is obtained by calculating the difference between the positions of the two feature points, and the second error value is compared with a second threshold preset in the robot; when the second error value is smaller than the second threshold, the two are successfully matched. The second threshold may be set according to the angular resolution of the camera, the frame rate and the moving speed of the robot, and is not limited herein; for example, for a camera with a 30 frame rate, 1080p resolution and a 120 degree field angle on a robot moving at 30 cm/s, the second threshold may be set to 9 pixels. The feature point may be an intersection of the feature lines, as shown in fig. 2.
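A sketch of the feature-point variant is given below, assuming the feature point is taken as the intersection of two feature lines and the second error is the distance between the current and historical feature points; intersecting the infinite lines through the two segments is an assumption, since the embodiment only states that the feature lines intersect.

import numpy as np

def line_intersection(seg_a, seg_b):
    # Intersection of the infinite lines through two segments (x1, y1, x2, y2).
    (x1, y1, x2, y2), (x3, y3, x4, y4) = seg_a, seg_b
    A = np.array([[x2 - x1, -(x4 - x3)],
                  [y2 - y1, -(y4 - y3)]], dtype=np.float64)
    b = np.array([x3 - x1, y3 - y1], dtype=np.float64)
    t, _ = np.linalg.solve(A, b)   # raises LinAlgError if the lines are parallel
    return np.array([x1 + t * (x2 - x1), y1 + t * (y2 - y1)])

def points_match(current_point, history_point, second_threshold=9.0):
    # Second error value = distance between the two feature points; matching
    # succeeds when it is below the preset second threshold (9 px in the example above).
    err = np.linalg.norm(np.asarray(current_point) - np.asarray(history_point))
    return err < second_threshold, err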
Optionally, when there are at least two preset types of objects, the relative position relationship between the at least two preset types of objects may be determined, and the auxiliary registration is performed during the matching process according to the relative position relationship between the at least two preset types of objects, so as to improve the matching speed. The relative position relationship of the at least two preset type objects can be determined by the relative position of the intersection point of the characteristic lines of the at least two preset type objects.
Optionally, the feature points may be selected by randomly setting at least two points on a fixed object in an operation scene of the robot as the feature points, so that the robot performs matching processing according to the feature points appearing in the image.
And S103, when the matching is successful, determining the current pose according to the feature line of the preset type target object and the historical feature line, and constructing a map according to the current pose.
In this embodiment, successful matching indicates that the preset type target object in the current image is the same as the preset type target object in the adjacent image and that the error between them satisfies the set threshold conditions, so the current pose of the robot can be calculated. The robot calculates, from the feature lines of the preset type target object in the current image together with the historical feature lines in the adjacent image, the motion of the robot between the time the adjacent image was acquired and the time the current image was acquired, thereby obtaining the pose of the robot at the current time, and a map is then constructed with the current pose as a reference.
Optionally, step S103 includes:
and acquiring a sampling point set of the feature line of the preset type target object at the same position as the historical feature line.
And calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And determining the current pose according to the rotation matrix and the translation matrix.
In this embodiment, the robot may determine its current pose by combining the feature lines and the feature points of the preset type target object. The robot acquires sampling points at the same positions on the feature line of the preset type target object and on the historical feature line, together with the historical feature points and the feature points of the preset type target object, and calculates these data points with a preset algorithm to determine the rotation matrix and the translation matrix between the adjacent image and the current image. From the rotation matrix and the translation matrix, the motion of the robot between the time the adjacent image was acquired and the time the current image was acquired is obtained; that is, the pose at the current time is obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
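The pose update itself can be sketched as follows, assuming poses are kept as 4x4 homogeneous transforms and the relative motion (R, T) is expressed in the adjacent-image frame; the embodiment only states that the current pose is obtained from the adjacent-image pose through the rotation and translation matrices, so this convention is an assumption.

import numpy as np

def compose_pose(pose_adjacent, R, T):
    # pose_current = pose_adjacent composed with the relative motion (R, T).
    relative = np.eye(4)
    relative[:3, :3] = R
    relative[:3, 3] = np.asarray(T, dtype=np.float64).ravel()
    return pose_adjacent @ relative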
Optionally, a set of sampling points at close positions on the feature line of the preset type target object and on the historical feature line may be selected for the calculation, where sampling points at close positions are sampling points, selected respectively from the feature line of the preset type target object and from the historical feature line, whose position difference is smaller than or equal to a preset position threshold.
It is understood that the sampling point set obtained from a feature line may be reduced, by taking the mean of the points in the set, to a single point representing the feature line of the preset type target object and a single point representing the historical feature line. For example, if a point A, a point B and a point C are selected from the feature line of the preset type target object, the point representing the feature line after calculation is (A + B + C)/3; similarly, three points at the corresponding positions are selected from the historical feature line and the same calculation is performed to obtain the point representing the historical feature line.
By way of specific example and not limitation, suppose the real scene contains preset feature points P1, P2, P3 and a feature line L. By acquiring the current image A and the adjacent image B, the robot obtains P1′, P2′, P3′ and a point L′ representing the feature line in the current image A, and P1″, P2″, P3″ and a point L″ representing the feature line in the adjacent image B; the point representing the feature line L is obtained by selecting a preset number of sampling points on the feature line and averaging them. Calculating the obtained data according to the preset algorithm gives:
P1′ = K*P1;  P1″ = K*(R*P1 + T);
P2′ = K*P2;  P2″ = K*(R*P2 + T);
P3′ = K*P3;  P3″ = K*(R*P3 + T);
L′ = K*L;   L″ = K*(R*L + T);
where K is the camera intrinsic matrix preset in the robot. The motion (R, T) of the robot can be derived from these equations, where R is the rotation matrix and T is the translation matrix. From the rotation matrix and the translation matrix, the motion of the robot between the time the adjacent image was acquired and the time the current image was acquired is obtained, and the pose of the robot at the current time is obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
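The "preset algorithm" is not fixed by the text. One standard monocular choice that fits the projection model above is essential-matrix decomposition, sketched here with OpenCV as an assumed illustration: the translation is recovered only up to scale, and at least five correspondences are needed, which is why the feature points would in practice be combined with the feature-line sampling points.

import cv2
import numpy as np

def relative_motion(points_current, points_adjacent, K):
    # points_*: Nx2 pixel coordinates of matched feature points and feature-line
    # sampling points in the current image A and the adjacent image B.
    # K: 3x3 camera intrinsic matrix preset in the robot.
    pts_a = np.asarray(points_current, dtype=np.float64)
    pts_b = np.asarray(points_adjacent, dtype=np.float64)
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, T, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, T  # rotation matrix and (unit-scale) translation between the two views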
Optionally, step S103 further includes:
and acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line.
And calculating the at least three pairs of sampling points according to a preset algorithm, and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And determining the current pose according to the rotation matrix and the translation matrix.
In this embodiment, the robot may also determine its current pose from the feature lines alone. The robot acquires at least three pairs of sampling points at the same positions on the feature line of the preset type target object and on the historical feature line, and calculates these sampling points with the same algorithm as above to determine the rotation matrix and the translation matrix between the adjacent image and the current image. From the rotation matrix and the translation matrix, the motion of the robot between the time the adjacent image was acquired and the time the current image was acquired is obtained; that is, the pose at the current time is obtained from the pose of the adjacent image through the rotation matrix and the translation matrix.
The method comprises the steps of acquiring a current image and determining a feature line of a preset type target object from the current image, so that a feature line for comparison is determined in the current environment of the robot; acquiring adjacent images and determining a historical feature line of the preset type target object from the adjacent images; and matching the feature line of the preset type target object with the historical feature line, so as to judge whether the feature line currently acquired by the robot is the same as the historical feature line of the preset type target object. After the matching is successful, the current pose is determined according to the feature line of the preset type target object and the historical feature line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the above-described visual SLAM method, fig. 5 is a schematic structural diagram of a visual SLAM device in an embodiment of the present application, and as shown in fig. 5, the visual SLAM device may include:
an obtaining module 501, configured to obtain a current image, and determine a feature line of a preset type target object from the current image.
A matching module 502, configured to obtain adjacent images, and determine a historical characteristic line of the preset type target object from the adjacent images; and matching the characteristic line of the preset type target object with the historical characteristic line.
And a building module 503, configured to determine a current pose according to the feature line after matching is successful, and build a map according to the current pose.
Optionally, the obtaining module 501 may include:
and the extracting unit is used for extracting each line segment in the image from the current image.
The first determining unit is used for determining a target object of which at least two line segments intersect at one point according to the connection condition of each line segment.
And the characteristic line unit is used for taking the line segment corresponding to the preset type target object as the characteristic line of the preset type target object when the target object conforms to the preset type.
Optionally, the matching module 502 may include:
and the second determining unit is used for determining the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
The first calculating unit is used for calculating a first error value between the characteristic value of the historical characteristic line and the characteristic value of the characteristic line of the preset type target object.
And the first matching unit is used for matching successfully when the first error value is smaller than a preset first threshold value.
Optionally, the matching module 502 may further include:
and the third determining unit is used for determining the characteristic point of the preset type target object according to the characteristic line of the preset type target object.
And the fourth determining unit is used for determining the historical characteristic points of the preset type target object according to the historical characteristic lines.
And the second calculation unit is used for calculating a second error value between the position of the historical characteristic point and the position of the characteristic point of the preset type target object.
And the second matching unit is used for matching successfully when the second error value is smaller than a preset second threshold value.
Optionally, the third determining unit may include:
and the intersection point determining subunit is used for determining the intersection point between the feature lines of the preset type target object.
And the feature point subunit is used for taking the intersection point as a feature point of the preset type target object.
Optionally, the building module 503 may include:
and the first acquisition unit is used for acquiring a sampling point set at the same position of the characteristic line of the preset type target object and the historical characteristic line.
And the third calculating unit is used for calculating the sampling point set, the historical characteristic points and the characteristic points of the preset type target object according to a preset algorithm and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And the fifth determining unit is used for determining the current pose according to the rotation matrix and the translation matrix.
Optionally, the building module 503 may further include:
and the second acquisition unit is used for acquiring at least three pairs of sampling points at the same positions of the characteristic line of the preset type target object and the historical characteristic line.
And the fourth calculating unit is used for calculating the at least three pairs of sampling points according to a preset algorithm and determining a rotation matrix and a translation matrix between the adjacent image and the current image.
And the sixth determining unit is used for determining the current pose according to the rotation matrix and the translation matrix.
The device acquires a current image and determines a feature line of a preset type target object from the current image, so that a feature line for comparison is determined in the current environment of the robot; acquires adjacent images and determines a historical feature line of the preset type target object from the adjacent images; and matches the feature line of the preset type target object with the historical feature line, so as to judge whether the feature line currently acquired by the robot is the same as the historical feature line of the preset type target object. After the matching is successful, the current pose is determined according to the feature line of the preset type target object and the historical feature line, so that the pose of the robot is accurately obtained from the current environment, and a map is constructed according to the current pose, thereby improving the accuracy and robustness of visual SLAM mapping and positioning.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the module described above may refer to corresponding processes in the foregoing system embodiments and method embodiments, and are not described herein again.
Fig. 6 is a schematic structural diagram of a robot according to an embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown.
As shown in fig. 6, the robot 6 of this embodiment includes: at least one processor 600 (only one shown in fig. 6), a memory 601 connected to the processor 600, and a computer program 602, such as a visual SLAM program, stored in the memory 601 and executable on the at least one processor 600. The processor 600, when executing the computer program 602, implements the steps in the various visual SLAM method embodiments described above, such as steps S101-S103 shown in fig. 1. Alternatively, the processor 600 executes the computer program 602 to implement the functions of the modules in the device embodiments, such as the functions of the modules 501 to 503 shown in fig. 5.
Illustratively, the computer program 602 may be partitioned into one or more modules that are stored in the memory 601 and executed by the processor 600 to accomplish the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 602 in the robot 6. For example, the computer program 602 may be divided into an obtaining module 501, a matching module 502, and a constructing module 503, and the specific functions of each module are as follows:
an obtaining module 501, configured to obtain a current image, and determine a feature line of a preset type target object from the current image;
a matching module 502, configured to obtain adjacent images, and determine a historical characteristic line of the preset type target object from the adjacent images; matching the characteristic line of the preset type target object with the historical characteristic line;
and a building module 503, configured to determine a current pose according to the feature line after matching is successful, and build a map according to the current pose.
The robot 6 may include, but is not limited to, a processor 600, a memory 601. Those skilled in the art will appreciate that fig. 6 is merely an example of the robot 6, and does not constitute a limitation on the robot 6, and may include more or less components than those shown, or combine certain components, or different components, such as input and output devices, network access devices, buses, etc.
The processor 600 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 601 may in some embodiments be an internal storage unit of the robot 6, such as a hard disk or a memory of the robot 6. The memory 601 may also be an external storage device of the robot 6 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the robot 6. Further, the memory 601 may also include both an internal storage unit and an external storage device of the robot 6. The memory 601 is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The memory 601 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/robot are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/robot, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.