
CN114140520A - Method and device for determining position information and storage medium - Google Patents

Method and device for determining position information and storage medium

Info

Publication number
CN114140520A
CN114140520A
Authority
CN
China
Prior art keywords
determining, positioning, mark, identifier, scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111454177.5A
Other languages
Chinese (zh)
Other versions
CN114140520B (en)
Inventor
马元勋
何林
唐旋来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202111454177.5A
Publication of CN114140520A
Application granted
Publication of CN114140520B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Navigation (AREA)

Abstract

The application discloses a method, a device and a storage medium for determining position information, relating to the technical field of artificial intelligence, and applicable to application scenes with complex environments. The method comprises the following steps: determining, based on the position coordinates of the mark points in a positioning tag, the mark partition to which each mark point belongs, wherein the mark partitions comprise at least a positioning partition and a scene partition; determining, in each mark partition, the position identifier represented by the positioning tag according to the position coordinates of the mark points, wherein the position identifiers comprise at least a positioning identifier and a scene identifier; and determining the position information of the mobile device based on the position identifiers.

Description

Method and device for determining position information and storage medium
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to a method and a device for determining position information and a storage medium.
Background
With the development of artificial intelligence technology, robots are being applied in an ever wider range of scenarios. At present, a robot can replace manual work to perform different tasks (such as transportation, shopping guidance or reception) in various application scenes. In the prior art, a robot needs to determine its current position while executing a task. Generally, positioning tags can be attached in advance to the ceiling or the ground of an application scene, and the robot can determine its own position based on the information in the positioning tags it captures while moving, so that a navigation path can be planned based on that position.
However, the existing methods for determining the position of a robot based on positioning tags are only suitable for application scenes with a simple environment and cannot cope with application scenes with a complex environment.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining position information and a storage medium which, compared with the prior art, can be applied to application scenes with complex environments.
To achieve this purpose, the technical scheme of the application is as follows:
In a first aspect, the present application provides a method for determining position information, including: determining, based on the position coordinates of the mark points in a positioning tag, the mark partition to which each mark point belongs, wherein the mark partitions comprise at least a positioning partition and a scene partition; determining, in each mark partition, the position identifier represented by the positioning tag according to the position coordinates of the mark points, wherein the position identifiers comprise at least a positioning identifier and a scene identifier; and determining the position information of the mobile device based on the position identifiers.
According to this technical scheme, the mark partition to which each mark point belongs can be determined from the position coordinates of the mark points in the positioning tag, different types of position identifiers can be determined in different mark partitions, and the position information of the mobile device can then be determined by combining these different types of position identifiers. In other words, the positioning tag is divided into at least two types of mark partitions, namely the positioning partition and the scene partition, so that the positioning identifier can be determined from the position coordinates of the mark points in the positioning partition and the scene identifier can be determined from the position coordinates of the mark points in the scene partition. Since different scene identifiers can represent different application scenes, the current position of the mobile device can be obtained from the positioning identifier, while the scene information of that position (such as floor information) can be obtained from the scene identifier. In this way, in application scenarios with a complex environment, the current position and the scene information of the mobile device can be combined to determine a path planning scheme (i.e., the position information in the present application) better suited to the current application scene. The technical scheme provided by the application can therefore be applied to application scenes with complex environments.
Optionally, in a possible design, the "determining the mark partition to which the mark point belongs based on the position coordinates of the mark point in the positioning tag" may include:
under the condition that the position coordinates of the mark points are in a first preset coordinate range, determining the mark subarea to which the mark points belong as a positioning subarea;
under the condition that the position coordinates of the mark points are in a second preset coordinate range, determining the mark partition to which the mark points belong as a scene partition; the first preset coordinate range and the second preset coordinate range have no overlapped area.
Optionally, in another possible design manner, the "determining the position identifier represented by the positioning tag according to the position coordinates of the mark points in each mark partition" may include:
in the positioning subarea, determining at least one positioning number according to the position coordinates of the mark points, and determining a positioning identifier represented by the positioning label from a positioning identifier library according to the at least one positioning number; the positioning identification library comprises a corresponding relation between a positioning number and a positioning identification;
in the scene partition, determining at least one scene number according to the position coordinates of the mark points, and determining a scene identifier represented by the positioning label from a scene identifier library according to the at least one scene number; the scene identification library comprises the corresponding relation between the scene number and the scene identification.
Optionally, in another possible design, when the scene identifier is a floor identifier, "determining the position information of the mobile device based on the position identifier" may include:
determining the current position information of the mobile device based on the positioning identifier and the floor identifier.
Optionally, in another possible design, when the scene identifier is a speed limit identifier, "determining the position information of the mobile device based on the position identifier" may include:
determining the current position information of the mobile device based on the positioning identifier, and determining the travel speed of the mobile device based on the speed limit identifier.
Optionally, in another possible design, when the scene identifier is a one-way traffic identifier, "determining the position information of the mobile device based on the position identifier" may include:
determining the current position information of the mobile device based on the positioning identifier, and determining whether the current road segment allows the mobile device to pass based on the one-way direction of the one-way traffic identifier and the travel direction of the mobile device.
Optionally, in another possible design, before the "determining the mark partition to which the mark point belongs based on the position coordinates of the mark point in the positioning tag" may further include:
acquiring a target image comprising a positioning label;
carrying out feature recognition on the target image, and determining a square point and a mark point according to the recognized target image;
establishing a coordinate system according to the position relation among all the positions;
and determining the position coordinates of the marking points according to the relative positions of the marking points in the coordinate system.
Optionally, in another possible design, the "determining the square points and the mark points according to the identified target graph" may include:
and determining a square point and a marking point from the pixel center points of the target graphs according to the pixel areas of the target graphs and the position relationship between the pixel center points of the target graphs.
Optionally, in another possible design manner, the "determining a square point and a mark point from the pixel center point of each target pattern according to the pixel area of each target pattern and the position relationship between the pixel center points of each target pattern" may include:
determining a first graph and a second graph from each target graph according to the pixel area; the pixel area of the first graph is larger than that of the second graph;
determining a third graph and a fourth graph from each second graph according to the position relation between the pixel center point of each second graph and the pixel center point of the first graph;
and determining the pixel center points of the first graph and the third graph as square points, and determining the pixel center point of the fourth graph as a mark point.
In a second aspect, the present application provides an apparatus for determining position information, including a mark partition determining module, a position identifier determining module and a position information determining module.
Specifically, the mark partition determining module is configured to determine, based on the position coordinates of the mark points in the positioning tag, the mark partition to which each mark point belongs; the mark partitions comprise at least a positioning partition and a scene partition;
the position identifier determining module is configured to determine, in each mark partition determined by the mark partition determining module, the position identifier represented by the positioning tag according to the position coordinates of the mark points; the position identifiers comprise at least a positioning identifier and a scene identifier;
and the position information determining module is configured to determine the position information of the mobile device based on the position identifier determined by the position identifier determining module.
Optionally, in a possible design, the mark partition determining module is specifically configured to:
when the position coordinates of a mark point are within a first preset coordinate range, determine that the mark partition to which the mark point belongs is the positioning partition;
when the position coordinates of a mark point are within a second preset coordinate range, determine that the mark partition to which the mark point belongs is the scene partition; the first preset coordinate range and the second preset coordinate range have no overlapping area.
Optionally, in another possible design, the position identifier determining module is specifically configured to:
in the positioning partition, determine at least one positioning number according to the position coordinates of the mark points, and determine the positioning identifier represented by the positioning tag from a positioning identifier library according to the at least one positioning number, wherein the positioning identifier library comprises correspondences between positioning numbers and positioning identifiers;
in the scene partition, determine at least one scene number according to the position coordinates of the mark points, and determine the scene identifier represented by the positioning tag from a scene identifier library according to the at least one scene number, wherein the scene identifier library comprises correspondences between scene numbers and scene identifiers.
Optionally, in another possible design, when the scene identifier is a floor identifier, the position information determining module is specifically configured to:
determine the current position information of the mobile device based on the positioning identifier and the floor identifier.
Optionally, in another possible design, when the scene identifier is a speed limit identifier, the position information determining module is specifically configured to:
determine the current position information of the mobile device based on the positioning identifier, and determine the travel speed of the mobile device based on the speed limit identifier.
Optionally, in another possible design, when the scene identifier is a one-way traffic identifier, the position information determining module is specifically configured to:
determine the current position information of the mobile device based on the positioning identifier, and determine whether the current road segment allows the mobile device to pass based on the one-way direction of the one-way traffic identifier and the travel direction of the mobile device.
Optionally, in another possible design, the apparatus for determining position information provided by the present application may further include an acquisition module, a recognition module, an establishing module and a position coordinate determining module;
the acquisition module is configured to acquire a target image including a positioning tag;
the recognition module is configured to perform feature recognition on the target image acquired by the acquisition module and determine orientation points and mark points according to the recognized target graphics;
the establishing module is configured to establish a coordinate system according to the positional relationship among the orientation points;
and the position coordinate determining module is configured to determine the position coordinates of the mark points according to the relative positions of the mark points in the coordinate system established by the establishing module.
Optionally, in another possible design, the recognition module is specifically configured to: determine the orientation points and the mark points from the pixel center points of the target graphics according to the pixel areas of the target graphics and the positional relationships between the pixel center points of the target graphics.
Optionally, in another possible design, the recognition module is further specifically configured to:
determine a first graphic and second graphics from the target graphics according to their pixel areas, wherein the pixel area of the first graphic is larger than that of the second graphics;
determine third graphics and fourth graphics from the second graphics according to the positional relationship between the pixel center point of each second graphic and the pixel center point of the first graphic;
and determine the pixel center points of the first graphic and the third graphics as the orientation points, and the pixel center points of the fourth graphics as the mark points.
In a third aspect, the present application provides a device for determining position information, comprising a memory, a processor, a bus and a communication interface; the memory is used for storing computer-executable instructions, and the processor is connected with the memory through the bus; when the device for determining position information runs, the processor executes the computer-executable instructions stored in the memory to cause the device to perform the method for determining position information provided in the first aspect.
Alternatively, in a possible design, the device for determining position information may be the mobile device itself or a part of the mobile device, for example a system-on-chip in the mobile device. The system-on-chip is adapted to support the device for determining position information in performing the functions referred to in the first aspect, for example receiving, transmitting or processing the data and/or information involved in the above method for determining position information. The chip system includes a chip and may also include other discrete devices or circuit structures.
Alternatively, in another possible design, the device for determining position information may be a physical machine used for determining the position information, or a part of such a physical machine, for example a system-on-chip in the physical machine. The system-on-chip is adapted to support the device for determining position information in performing the functions referred to in the first aspect, for example receiving, transmitting or processing the data and/or information involved in the above method for determining position information. The chip system includes a chip and may also include other discrete devices or circuit structures.
In a fourth aspect, the present application provides a computer-readable storage medium having instructions stored therein, which when executed by a computer, cause the computer to perform the method of determining location information as provided in the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the method of determining location information as provided in the first aspect.
It should be noted that all or part of the computer instructions may be stored on the computer readable storage medium. The computer-readable storage medium may be packaged with the processor of the device for determining location information, or may be packaged separately from the processor of the device for determining location information, which is not limited in this application.
For the descriptions of the second, third, fourth and fifth aspects in this application, reference may be made to the detailed description of the first aspect; in addition, for the beneficial effects described in the second aspect, the third aspect, the fourth aspect and the fifth aspect, reference may be made to beneficial effect analysis of the first aspect, and details are not repeated here.
In the present application, the names of the above-mentioned location information determining means do not limit the devices or functional modules themselves, and in actual implementation, these devices or functional modules may appear by other names. Insofar as the functions of the respective devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic flowchart of a method for determining location information according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of two different types of positioning tags according to an embodiment of the present application;
FIG. 3 is a schematic diagram of another exemplary positioning tag provided in an embodiment of the present application;
fig. 4 is a schematic view of a restaurant scene provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of another method for determining location information according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another method for determining location information according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an apparatus for determining location information according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another apparatus for determining location information according to an embodiment of the present disclosure.
Detailed Description
The following describes a method, an apparatus, and a storage medium for determining location information provided in an embodiment of the present application in detail with reference to the accompanying drawings.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
With the development of artificial intelligence technology, robots are being applied in an ever wider range of scenarios. At present, a robot can replace manual work to perform different tasks (such as transportation, shopping guidance or reception) in various application scenes. In the prior art, a robot needs to determine its current position while executing a task. Generally, positioning tags can be attached in advance to the ceiling or the ground of an application scene, and the robot can determine its own position based on the information in the positioning tags it captures while moving, so that a navigation path can be planned based on that position.
However, the existing methods for determining the position of a robot based on positioning tags are only suitable for application scenes with a simple environment and cannot cope with application scenes with a complex environment.
In view of the problems in the prior art, embodiments of the present application provide a method for determining position information in which different types of position identifiers are determined in different mark partitions, so that not only the current position information of the mobile device but also the scene information of its current position can be obtained, and a path planning scheme better suited to the current application scene can be determined. The method can therefore be applied to application scenes with complex environments.
The method for determining position information provided by the embodiment of the application can be applied to a device for determining position information. In a possible implementation, the device for determining position information may be the mobile device itself or a system-on-chip in the mobile device. Taking the case where the device is the mobile device itself as an example, the mobile device may be provided with a collection device such as a camera; the collection device can capture target images while the mobile device moves, and the mobile device can determine its position information based on the position coordinates of the mark points in the positioning tag in a target image.
Wherein the mobile device may be a robot. Of course, in practical applications, the mobile device may also be other movable artificial intelligence devices.
In another possible implementation manner, the device for determining position information may be a physical machine (e.g., a backend server of the mobile device), or a virtual machine (VM) deployed on a physical machine. Taking the case where the device for determining position information is a backend server as an example, the backend server may obtain a target image including a positioning tag from the mobile device, and may then determine the position information for the mobile device based on the position coordinates of the mark points in the positioning tag.
The following describes in detail a method for determining location information provided in an embodiment of the present application, taking the location information determining apparatus as a mobile device itself as an example.
Referring to fig. 1, a method for determining location information provided in an embodiment of the present application includes S101 to S103:
S101, determining the mark partition to which each mark point belongs based on the position coordinates of the mark points in the positioning tag.
In order to determine, based on positioning tags, a path planning scheme better suited to the current application scene in application scenes with a complex environment, different types of mark partitions can be set in the positioning tag in advance in the embodiment of the application, so that different types of position information can be determined from the mark points in different mark partitions, meeting the requirements of application scenes with a complex environment. Therefore, after the mark points in the positioning tag have been determined, the mark partition to which each mark point belongs can be determined based on the position coordinates of each mark point.
The mark partitions comprise at least a positioning partition and a scene partition. In the positioning partition, different positions can be distinguished by mark points with different position coordinates; in the scene partition, different scenes can be distinguished by mark points with different position coordinates.
Optionally, the method for determining position information provided in the embodiment of the present application may determine the position coordinates of the mark points as follows: acquiring a target image including a positioning tag; performing feature recognition on the target image, and determining orientation points and mark points according to the recognized target graphics; establishing a coordinate system according to the positional relationship among the orientation points; and determining the position coordinates of the mark points according to the relative positions of the mark points in the coordinate system.
Because mirror-image misrecognition may occur when the mobile device recognizes the information in a positioning tag, the positioning tag of the embodiment of the application includes not only mark points but also orientation points. A coordinate system established based on the orientation points gives each mark point unique coordinates, which prevents the mobile device from reading wrong information because of the mirror-image problem.
Illustratively, referring to FIG. 2, schematic diagrams of two positioning tags of different styles are provided. As shown in fig. 2, the positioning tag in (a) of fig. 2 includes three target graphics, whose center points may be determined as mark points A1, A2 and A3, and the positioning tag in (b) of fig. 2 includes three target graphics, whose center points may be determined as mark points B1, B2 and B3. It can be seen that (a) and (b) in fig. 2 are a pair of mirror images; if the mobile device acquires target images containing these two positioning tags, it cannot tell them apart from the mark points alone. However, if the same coordinate system is established in both (a) and (b), then in that common reference frame the position coordinates of mark point A1 differ from those of B1, and the position coordinates of A3 differ from those of B3, so in the embodiment of the present application (a) and (b) of fig. 2 can be distinguished accurately based on the position coordinates of the mark points in the positioning tag.
Taking a positioning tag attached to the ceiling as an example, in a possible implementation manner the top of the mobile device may be provided with a collection device such as a camera. The collection device can capture images of the ceiling of the application scene in real time, and the mobile device can process each real-time image; when a real-time image is determined to include a positioning tag, it is treated as a target image and passed on to feature recognition. Optionally, to save storage resources, a real-time image determined not to include a positioning tag may be deleted.
For example, the mobile device may invoke a pre-trained deep learning model to recognize the target graphics in the target image, where a target graphic may be a pattern previously set on the positioning tag. For example, the positioning tag may be a rectangular tag and the target graphics may be circular sub-patterns disposed at different positions on the rectangular tag.
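For illustration only, the following sketch detects the circular target graphics in a camera frame and returns their pixel centers and areas. The patent itself mentions a pre-trained deep learning model for this step, so the blob-detection stand-in, the function name detect_target_graphics and all thresholds here are assumptions rather than the patented method:

```python
import math
import cv2

def detect_target_graphics(image_bgr):
    """Return (cx, cy, pixel_area) for each circular target graphic found in a frame.

    Illustrative stand-in for the recognition step: the patent mentions a
    pre-trained deep learning model, while this sketch uses simple blob
    detection. All thresholds below are assumed values.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30.0          # assumed minimum blob size in pixels
    params.filterByCircularity = True
    params.minCircularity = 0.7    # keep only roughly circular blobs
    detector = cv2.SimpleBlobDetector_create(params)

    keypoints = detector.detect(gray)
    # keypoint.size is the blob diameter; approximate the pixel area from it
    return [(kp.pt[0], kp.pt[1], math.pi * (kp.size / 2.0) ** 2) for kp in keypoints]
```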
In practical applications of the embodiment of the present application, positioning tags with different mark point arrangements need to be attached at different locations. Since the coordinate system used to determine the position coordinates of the mark points is established based on the orientation points in a positioning tag, the orientation points are the same in every positioning tag, which ensures that the reference coordinate systems for the position coordinates of the mark points in different positioning tags are the same.
Optionally, the orientation points and the mark points may be determined in the following manner in the embodiment of the present application: determining the orientation points and the mark points from the pixel center points of the target graphics according to the pixel areas of the target graphics and the positional relationships between the pixel center points of the target graphics.
In general, when a positioning tag is attached to a ceiling, the target graphics provided in the positioning tag are circles with a simple shape, in order to minimize the impact on the decoration style of the application scene. Because the orientation points and the mark points need to be determined from different target graphics, several target graphics with fixed relative positions and different sizes can be arranged in the positioning tag. In this way, even when the target graphics are all simple circles, the mobile device can accurately divide the pixel center points of the target graphics into orientation points and mark points according to the pixel areas of the target graphics in the captured positioning tag and the positional relationships between their pixel center points.
Optionally, in a possible implementation manner, the mobile device may determine a first graphic and second graphics from the target graphics according to the pixel area of each target graphic in the captured positioning tag; then determine third graphics and fourth graphics from the second graphics according to the positional relationship between the pixel center point of each second graphic and the pixel center point of the first graphic; and finally determine the pixel center points of the first graphic and the third graphics as orientation points and the pixel center points of the fourth graphics as mark points.
Illustratively, referring to fig. 3, a possible schematic diagram of a positioning tag provided in an embodiment of the present application is shown. As shown in fig. 3, the positioning tag includes one first graphic and a plurality of second graphics (all target graphics in fig. 3 other than the labeled first graphic are second graphics), and the plurality of second graphics includes three third graphics and a plurality of fourth graphics (all target graphics in fig. 3 other than the labeled first graphic and third graphics are fourth graphics). To enable the mobile device to accurately divide the pixel center points of the target graphics into orientation points and mark points, in the positioning tag of fig. 3 the pixel area of the first graphic is larger than that of the second graphics, and the three third graphics are the second graphics closest to the first graphic.
Taking the positioning tag of fig. 3 as an example, after the mobile device acquires a target image including the positioning tag, it may recognize the target graphics in the positioning tag. Since the pixel area of the first graphic set in the positioning tag is larger than that of the second graphics, the mobile device can accurately distinguish the first graphic from the second graphics based on the pixel area of each target graphic. In addition, since the three third graphics set in the positioning tag are closest to the first graphic, the mobile device may compute the distance between the pixel center point of each second graphic and the pixel center point of the first graphic, determine the second graphics corresponding to the three pixel center points with the smallest distances to the pixel center point of the first graphic as the third graphics, and determine the rest as fourth graphics. Then the pixel center points N1, N2 and N3 of the three third graphics, together with the pixel center point M of the first graphic, may be determined as the orientation points in the embodiment of the present application, and the pixel center points of the fourth graphics may be determined as the mark points.
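The splitting of the detected centers into orientation points and mark points described above can be sketched as follows; the function name and the assumption that the detector output is a list of (cx, cy, pixel_area) tuples, e.g. from detect_target_graphics() above, are illustrative:

```python
def classify_centers(centers):
    """Split detected circle centers into orientation points (M, N1, N2, N3) and mark points.

    centers: list of (cx, cy, pixel_area) tuples, one per target graphic. Assumes, as
    in Fig. 3, that the first graphic is the single largest circle and the three third
    graphics are the second graphics closest to it; everything else is a fourth graphic.
    """
    by_area = sorted(centers, key=lambda c: c[2], reverse=True)
    first = by_area[0]                         # largest pixel area -> first graphic
    second = by_area[1:]                       # remaining circles -> second graphics
    m = (first[0], first[1])

    def dist2(c):
        return (c[0] - m[0]) ** 2 + (c[1] - m[1]) ** 2

    second_by_distance = sorted(second, key=dist2)
    third = second_by_distance[:3]             # three closest -> third graphics
    fourth = second_by_distance[3:]            # the rest -> fourth graphics

    orientation_points = [m] + [(c[0], c[1]) for c in third]   # M, N1, N2, N3 (labeling order assumed)
    mark_points = [(c[0], c[1]) for c in fourth]
    return orientation_points, mark_points
```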
In one possible implementation, after the orientation points M, N1, N2 and N3 are determined, a coordinate system with N2 as the origin, N2N1 as the x-axis and N2N3 as the y-axis may be established based on the cross product of the vector MN2 and the vector N2N1, the cross product of the vector MN2 and the vector N2N3, and a preset coordinate establishment rule. The position coordinates of each mark point can then be obtained from the straight-line distances from the mark point to the x-axis and the y-axis of this coordinate system.
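A minimal sketch of this frame construction is shown below. The patent leaves the "preset coordinate establishment rule" unspecified, so the axis labeling, the orthogonality assumption and the mirror check via the sign of a 2-D cross product are assumptions; the helper name tag_frame_coordinates is likewise illustrative:

```python
import numpy as np

def tag_frame_coordinates(m, n1, n2, n3, mark_points):
    """Express mark-point pixel centers in the tag's own coordinate frame.

    m, n1, n2, n3 : pixel centers of the orientation points, with n2 taken as the
                    origin, n2->n1 as the x-axis and n2->n3 as the y-axis (assumed
                    orthogonal, as drawn in Fig. 3).
    mark_points   : iterable of (x, y) pixel centers of the mark points.

    The sign test below, based on the 2-D cross product with the vector M->N2, is an
    assumed stand-in for the patent's "preset coordinate establishment rule" that
    resolves the mirror ambiguity illustrated by Fig. 2.
    """
    m, n1, n2, n3 = (np.asarray(p, dtype=float) for p in (m, n1, n2, n3))
    x_axis = (n1 - n2) / np.linalg.norm(n1 - n2)
    y_axis = (n3 - n2) / np.linalg.norm(n3 - n2)

    def cross2d(a, b):
        return a[0] * b[1] - a[1] * b[0]

    # Assumed convention: M lies on the positive-cross side of the x-axis; if it
    # does not, the tag is being seen mirrored, so the two axes are swapped.
    if cross2d(x_axis, n2 - m) < 0:
        x_axis, y_axis = y_axis, x_axis

    coords = []
    for p in mark_points:
        d = np.asarray(p, dtype=float) - n2
        # signed straight-line distances to the y-axis and x-axis respectively
        coords.append((float(np.dot(d, x_axis)), float(np.dot(d, y_axis))))
    return coords
```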
It can be understood that, in practical applications, if positioning tags of the style shown in fig. 3 are used as landmarks for a mobile device, then when positioning tags are deployed at different locations, the sizes of the first graphic and the third graphics of each positioning tag and the positional relationship between the third graphics and the first graphic should be consistent with fig. 3. Positioning information and scene information can then be distinguished by deploying different arrangements of the fourth graphics in different positioning tags.
Optionally, after the mobile device determines the position coordinates of the mark point, if the position coordinates of the mark point are within the first preset coordinate range, it may be determined that the mark partition to which the mark point belongs is the positioning partition, and if the position coordinates of the mark point are within the second preset coordinate range, it may be determined that the mark partition to which the mark point belongs is the scene partition.
The first preset coordinate range and the second preset coordinate range may be two predetermined coordinate ranges with no overlapping area.
In order to enable the mobile device to accurately distinguish the mark partition to which each mark point belongs based on the position coordinates of the mark points in the positioning tag, the embodiment of the application presets a first preset coordinate range and a second preset coordinate range with no overlapping area, and the mobile device can determine the mark partition to which a mark point belongs based on the coordinate range within which its position coordinates fall.
Further, in order to enable the mobile device to quickly determine the mark partition to which a mark point belongs based on its position coordinates and thus save computing resources, the positioning tag may optionally be divided into different mark partitions according to the quadrants of the coordinate system, so that the mobile device can quickly determine the mark partition to which a mark point belongs according to the coordinate quadrant in which its position coordinates fall.
For example, taking the coordinate system established in fig. 3, the first preset coordinate range may be y < 0, corresponding to the third and fourth quadrants of the coordinate system (i.e., the lower half of the coordinate axes in fig. 3), and the second preset coordinate range may be y > 0 and x < 0, corresponding to the second quadrant of the coordinate system (i.e., the upper right region of the coordinate axes in fig. 3). Then, when the position coordinates of a mark point fall in the third or fourth quadrant of fig. 3, the mark partition to which the mark point belongs may be determined as the positioning partition; when the position coordinates of a mark point fall in the second quadrant of fig. 3, the mark partition to which the mark point belongs may be determined as the scene partition.
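In code, the quadrant test of this example reduces to a few comparisons; the partition names and the choice to ignore coordinates outside both ranges are illustrative, and the concrete preset ranges are deployment-specific:

```python
def mark_partition(coord):
    """Assign a mark point to a mark partition from its tag-frame coordinates.

    Uses the Fig. 3 example from the text: y < 0 (third/fourth quadrants) maps to the
    positioning partition, y > 0 and x < 0 (second quadrant) maps to the scene partition.
    """
    x, y = coord
    if y < 0:
        return "positioning"
    if y > 0 and x < 0:
        return "scene"
    return None  # outside both preset coordinate ranges; ignored in this sketch
```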
S102, determining, in each mark partition, the position identifier represented by the positioning tag according to the position coordinates of the mark points.
In the embodiment of the application, mark points with different position coordinates can represent different position identifiers, so that different position information can be determined based on different position identifiers.
The position identifiers comprise at least a positioning identifier and a scene identifier.
Optionally, in a possible implementation manner, in the positioning partition, at least one positioning number may be determined according to the position coordinates of the mark points, and the positioning identifier represented by the positioning tag is determined from the positioning identifier library according to the at least one positioning number; in the scene partition, at least one scene number may be determined according to the position coordinates of the mark points, and the scene identifier represented by the positioning tag is determined from the scene identifier library according to the at least one scene number.
In order to reduce the number of target graphics in the positioning tag, and thus further reduce the impact on the decoration style of the application scene after the positioning tag is attached, in the embodiment of the application the mark points at different position coordinates can be assigned different numbers, and these numbers can then be combined into new numbers. In this way, even with a limited number of target graphics, more positioning numbers and scene numbers can be obtained through combinations.
The positioning identifier library may include a corresponding relationship between a positioning number and a positioning identifier, and the scene identifier library may include a corresponding relationship between a scene number and a scene identifier.
Taking fig. 3 as an example, mark points P1, P2 and P3 represent different positioning numbers and mark points Q1 and Q2 represent different scene numbers, so the positioning identifier library may contain the correspondences between P1, P2, P3, P1P2, P1P3, P2P3 and P1P2P3 and different positioning identifiers, and the scene identifier library may contain the correspondences between Q1, Q2 and Q1Q2 and different scene identifiers. It can be seen that, by combining the mark points within each mark partition, the number of mark points in the positioning tag, and hence the number of target graphics, can be reduced. For example, if a positioning tag in the style of fig. 3 is used as a landmark, the mark points in the third and fourth quadrants can provide up to 2^18 combinations.
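The number-combination lookup can be sketched as follows; only the P/Q naming follows the text, while the library contents, the key format and the idea of loading them from configuration are invented for the example:

```python
# Illustrative identifier libraries. Keys are the combined numbers encoded by the mark
# points that are present; values are the identifiers they stand for. The entries
# themselves are invented for this example.
POSITIONING_LIBRARY = {
    "P1": "cell-001", "P2": "cell-002", "P3": "cell-003", "P1P2": "cell-004",
    "P1P3": "cell-005", "P2P3": "cell-006", "P1P2P3": "cell-007",
}
SCENE_LIBRARY = {"Q1": "floor-1", "Q2": "floor-2", "Q1Q2": "floor-3"}

def lookup_identifiers(positioning_numbers, scene_numbers):
    """Combine the numbers read from each partition and resolve them to identifiers."""
    positioning_key = "".join(sorted(positioning_numbers))   # e.g. ["P3", "P1"] -> "P1P3"
    scene_key = "".join(sorted(scene_numbers))               # e.g. ["Q2", "Q1"] -> "Q1Q2"
    return POSITIONING_LIBRARY.get(positioning_key), SCENE_LIBRARY.get(scene_key)
```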
S103, determining the position information of the mobile device based on the position identifiers.
In the embodiment of the application, position identifiers of different types and numbers can be determined based on mark points with different position coordinates, so that different position information can be determined based on these identifiers, meeting the requirements of application scenes with complex environments.
Optionally, in a possible implementation manner, the scene identifier may be a floor identifier, and the mobile device may determine current location information of the mobile device based on the positioning identifier and the floor identifier.
With the wide application of mobile devices, a mobile device may now be deployed in an application scene with multiple floors; in order to enable the mobile device to distinguish different floors, the scene identifier in this embodiment of the application may be a floor identifier.
For example, a positioning tag may be attached to the ceiling of the elevator waiting area, and when the mobile device rides the elevator to move across floors, it can identify the floor and determine its current position based on an acquired target image that includes this positioning tag.
Optionally, in another possible implementation manner, the scene identifier may be a speed limit identifier, and the mobile device may determine its current position information based on the positioning identifier and determine its travel speed based on the speed limit identifier.
At present, a mobile device may be applied not only in application scenes with relatively flat ground but also in application scenes where the ground is relatively uneven or covered with carpet. To ensure that the mobile device does not get stuck or have its housing jammed when moving in such scenes, the embodiment of the application can set the relatively uneven or carpeted areas as deceleration areas. To enable the mobile device to recognize the set deceleration areas, the scene identifier in the embodiment of the present application may be a speed limit identifier.
In addition, in the embodiment of the application different deceleration areas can be set according to the unevenness of the ground, and positioning tags with different speed limit identifiers can be attached in the different deceleration areas. After the mobile device captures a positioning tag that includes a speed limit identifier, it can control its travel speed according to the speed range corresponding to that identifier.
Optionally, in yet another possible implementation manner, the scene identifier may be a one-way traffic identifier; the mobile device may determine its current position information based on the positioning identifier, and determine whether it is allowed to pass through the current road segment based on the one-way direction of the one-way traffic identifier and its own travel direction.
With the wide application of mobile devices, the number of mobile devices deployed in each application scene is increasing, and path congestion requiring devices to wait occurs frequently, which affects the efficiency with which the mobile devices execute their tasks. In order to reduce path congestion, the embodiment of the application can set one-way areas in areas with heavy mobile device traffic (for example, the area leading from a kitchen to a dining room). To enable the mobile device to recognize a one-way area and its direction, the scene identifier in the embodiment of the present application may be a one-way traffic identifier. In this way, the mobile device can plan its path better according to the indication of the one-way traffic identifier while moving, avoiding congestion.
Illustratively, referring to FIG. 4, a scene diagram of a restaurant is provided. As shown in fig. 4, the scene includes five dining areas. To avoid the path congestion that may occur when many mobile devices are present, the road segment between dining area 1 and dining area 2 may be set as one-way area A and the road segment between dining area 3 and dining area 2 as one-way area B, with the traffic direction of each one-way area shown by an arrow. After receiving a meal delivery instruction, a mobile device can therefore go from the stop area to the kitchen meal outlet to pick up the meal, deliver it along one-way area A, and return to the stop area through one-way area B after the meal has been delivered.
It can be understood that, in practical applications, the scene identifiers in the positioning tag provided in the embodiment of the present application may be of various kinds, which is not limited in the embodiment of the present application; users may set the scene identifiers as needed. For example, taking the positioning tag provided in fig. 3 as an example, the lower left region of the coordinate axes may be designated as a positioning partition for distinguishing different positions, the lower right region of the coordinate axes may be designated as a scene partition for identifying the direction of a one-way area, and the upper right region of the coordinate axes may be designated as a scene partition for identifying the floor.
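How the three kinds of scene identifier discussed above feed into step S103 can be illustrated with a small dispatch routine; the robot controller object, its attribute names and the identifier prefixes are all hypothetical, not part of the claimed method:

```python
def apply_position_identifiers(robot, positioning_id, scene_id):
    """Combine the positioning identifier with the scene identifier (step S103).

    `robot` is a hypothetical controller whose attributes (map, speed_limits,
    oneway_directions, ...) are illustrative only.
    """
    position = robot.map.lookup(positioning_id)          # current coordinates from the positioning id

    if scene_id is None:
        robot.set_pose(position=position)                # no scene information on this tag
    elif scene_id.startswith("floor-"):
        # floor identifier: the position is only meaningful together with the floor
        robot.set_pose(position=position, floor=scene_id)
    elif scene_id.startswith("limit-"):
        # speed-limit identifier: clamp the travel speed to the configured range
        robot.set_pose(position=position)
        robot.set_max_speed(robot.speed_limits[scene_id])
    elif scene_id.startswith("oneway-"):
        # one-way identifier: allow passage only when the travel direction
        # matches the one-way direction bound to the identifier
        robot.set_pose(position=position)
        allowed_heading = robot.oneway_directions[scene_id]
        robot.allow_passage(robot.heading_matches(allowed_heading))
    else:
        robot.set_pose(position=position)
```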
According to the technical scheme provided by the embodiment of the application, the mark partition to which each mark point belongs can be determined from the position coordinates of the mark points in the positioning tag, different types of position identifiers can be determined in different mark partitions, and the position information of the mobile device can then be determined by combining these different types of position identifiers. In other words, the positioning tag is divided into at least two types of mark partitions, namely the positioning partition and the scene partition, so that the positioning identifier can be determined from the position coordinates of the mark points in the positioning partition and the scene identifier can be determined from the position coordinates of the mark points in the scene partition. Since different scene identifiers can represent different application scenes, the current position of the mobile device can be obtained from the positioning identifier, while the scene information of that position (such as floor information) can be obtained from the scene identifier. In this way, in application scenarios with a complex environment, the current position and the scene information of the mobile device can be combined to determine a path planning scheme (i.e., the position information in the embodiment of the present application) better suited to the current application scene. The technical scheme provided by the embodiment of the application can therefore be applied to application scenes with complex environments.
In summary, as shown in fig. 5, an embodiment of the present application further provides a method for determining position information, including S501 to S509 (a combined code sketch follows the list of steps):
S501, acquiring a target image including a positioning tag.
S502, performing feature recognition on the target image, and determining a first graphic and second graphics from the recognized target graphics according to their pixel areas.
S503, determining third graphics and fourth graphics from the second graphics according to the positional relationship between the pixel center point of each second graphic and the pixel center point of the first graphic.
S504, determining the pixel center points of the first graphic and the third graphics as orientation points, and the pixel center points of the fourth graphics as mark points.
S505, establishing a coordinate system according to the positional relationship among the orientation points.
S506, determining the position coordinates of the mark points according to the relative positions of the mark points in the coordinate system.
S507, determining the mark partition to which each mark point belongs based on the position coordinates of the mark points in the positioning tag.
S508, determining, in each mark partition, the position identifier represented by the positioning tag according to the position coordinates of the mark points.
S509, determining the position information of the mobile device based on the position identifiers.
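Putting S501 to S509 together, an end-to-end sketch using the hypothetical helpers from the earlier sketches might look like this; number_for(), which maps a mark-point coordinate to its P/Q number, is assumed and not specified by the text:

```python
def determine_position_information(robot, frame, number_for):
    """End-to-end sketch of S501-S509; every helper used here is hypothetical."""
    centers = detect_target_graphics(frame)                        # S501-S502: detect target graphics
    orientation_points, mark_px = classify_centers(centers)        # S502-S504: split the centers
    m, n1, n2, n3 = orientation_points
    coords = tag_frame_coordinates(m, n1, n2, n3, mark_px)         # S505-S506: tag-frame coordinates

    positioning_numbers, scene_numbers = [], []
    for c in coords:                                               # S507: assign mark partitions
        partition = mark_partition(c)
        if partition == "positioning":
            positioning_numbers.append(number_for(c))              # number_for() is an assumed mapping
        elif partition == "scene":
            scene_numbers.append(number_for(c))

    positioning_id, scene_id = lookup_identifiers(positioning_numbers, scene_numbers)  # S508
    apply_position_identifiers(robot, positioning_id, scene_id)    # S509: derive position information
```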
Optionally, as shown in fig. 6, an embodiment of the present application further provides a method for determining location information, including S601-S604:
S601, obtaining the position coordinates of the mark points in the positioning tag.
S602, when the position coordinates of a mark point are within a first preset coordinate range, determining that the mark partition to which the mark point belongs is the positioning partition; when the position coordinates of a mark point are within a second preset coordinate range, determining that the mark partition to which the mark point belongs is the scene partition.
S603, in the positioning partition, determining at least one positioning number according to the position coordinates of the mark points, and determining the positioning identifier represented by the positioning tag from a positioning identifier library according to the at least one positioning number; in the scene partition, determining at least one scene number according to the position coordinates of the mark points, and determining the scene identifier represented by the positioning tag from a scene identifier library according to the at least one scene number.
S604, determining the position information of the mobile device according to the scene identifier and the positioning identifier.
As shown in fig. 7, an embodiment of the present application further provides a device for determining position information, which may include a mark partition determining module 11, a position identifier determining module 12 and a position information determining module 13.
The mark partition determining module 11 performs S101 in the above method embodiment, the position identifier determining module 12 performs S102, and the position information determining module 13 performs S103.
Specifically, the mark partition determining module 11 is configured to determine, based on the position coordinates of the mark points in the positioning tag, the mark partition to which each mark point belongs; the mark partitions comprise at least a positioning partition and a scene partition;
the position identifier determining module 12 is configured to determine, in each mark partition determined by the mark partition determining module 11, the position identifier represented by the positioning tag according to the position coordinates of the mark points; the position identifiers comprise at least a positioning identifier and a scene identifier;
and the position information determining module 13 is configured to determine the position information of the mobile device based on the position identifier determined by the position identifier determining module 12.
Optionally, in a possible implementation manner, the mark partition determining module 11 is specifically configured to:
when the position coordinates of a mark point are within a first preset coordinate range, determine that the mark partition to which the mark point belongs is the positioning partition;
when the position coordinates of a mark point are within a second preset coordinate range, determine that the mark partition to which the mark point belongs is the scene partition; the first preset coordinate range and the second preset coordinate range have no overlapping area.
Optionally, in another possible implementation manner, the position identifier determining module 12 is specifically configured to:
in the positioning partition, determine at least one positioning number according to the position coordinates of the mark points, and determine the positioning identifier represented by the positioning tag from a positioning identifier library according to the at least one positioning number, wherein the positioning identifier library comprises correspondences between positioning numbers and positioning identifiers;
in the scene partition, determine at least one scene number according to the position coordinates of the mark points, and determine the scene identifier represented by the positioning tag from a scene identifier library according to the at least one scene number, wherein the scene identifier library comprises correspondences between scene numbers and scene identifiers.
Optionally, in another possible implementation manner, in the case that the scene identifier is a floor identifier, the location information determining module 13 is specifically configured to:
determine the current location information of the mobile device based on the positioning identifier and the floor identifier.
Optionally, in another possible implementation manner, in the case that the scene identifier is a speed limit identifier, the location information determining module 13 is specifically configured to:
determine the current location information of the mobile device based on the positioning identifier, and determine the travel speed of the mobile device based on the speed limit identifier.
Optionally, in another possible implementation manner, in the case that the scene identifier is a one-way pass identifier, the location information determining module 13 is specifically configured to:
determine the current location information of the mobile device based on the positioning identifier, and determine whether the current road section allows the mobile device to pass based on the one-way passing direction of the one-way pass identifier and the travel direction of the mobile device.
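To make the three scene-identifier cases above (floor, speed limit and one-way pass) concrete, a small hedged sketch follows; the identifier string formats, the assumed speed unit and the 90-degree passage rule are illustrative assumptions, not requirements of this application.

```python
def apply_scene_identifier(positioning_id, scene_id, heading_deg=None):
    """Combine the positioning identifier with the behaviour implied by the scene identifier."""
    info = {"position": positioning_id}
    if scene_id.startswith("floor:"):
        # Floor identifier: record which floor the current position belongs to.
        info["floor"] = scene_id.split(":", 1)[1]
    elif scene_id.startswith("speed-limit:"):
        # Speed limit identifier: cap the travel speed on this road section (assumed unit: m/s).
        info["max_speed_m_s"] = float(scene_id.split(":", 1)[1])
    elif scene_id.startswith("one-way:"):
        # One-way pass identifier: compare the allowed direction with the device's travel direction.
        allowed_deg = float(scene_id.split(":", 1)[1])
        diff = abs((heading_deg - allowed_deg + 180) % 360 - 180)
        info["passage_allowed"] = diff < 90.0   # assumed rule: within 90 degrees of the allowed direction
    return info


print(apply_scene_identifier("aisle-A-03", "floor:3"))
# {'position': 'aisle-A-03', 'floor': '3'}
print(apply_scene_identifier("aisle-A-03", "one-way:90", heading_deg=265))
# {'position': 'aisle-A-03', 'passage_allowed': False}
```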
Optionally, in another possible implementation manner, the device for determining location information provided by the present application may further include an acquisition module, an identification module, an establishing module, and a position coordinate determining module;
the acquisition module is configured to acquire a target image comprising a positioning label;
the identification module is configured to perform feature recognition on the target image acquired by the acquisition module, and to determine the square points and the mark point according to the identified target graphs;
the establishing module is configured to establish a coordinate system according to the positional relationship among the square points;
and the position coordinate determining module is configured to determine the position coordinates of the mark point according to the relative position of the mark point in the coordinate system established by the establishing module.
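As a rough illustration of the last two steps (establishing a coordinate system from the square points and reading off the mark point's relative position), the sketch below assumes that three square points define the origin and the two axes of the tag coordinate system; this convention is an assumption made for the sketch, not necessarily the construction used in this application.

```python
import numpy as np


def mark_point_coordinates(square_points, mark_point):
    """square_points: three (x, y) pixel centers assumed to give the origin, the x-axis point
    and the y-axis point of the tag coordinate system; mark_point: (x, y) pixel center of the
    mark point. Returns the mark point's position coordinates in that coordinate system."""
    origin, x_ref, y_ref = (np.asarray(p, dtype=float) for p in square_points)
    basis = np.column_stack([x_ref - origin, y_ref - origin])
    # Solve origin + a * x_axis + b * y_axis = mark_point for the relative position (a, b).
    a, b = np.linalg.solve(basis, np.asarray(mark_point, dtype=float) - origin)
    return float(a), float(b)


# Example with an axis-aligned tag 100 px on a side: the mark point sits at (0.25, 0.75) of the tag.
print(mark_point_coordinates([(0, 0), (100, 0), (0, 100)], (25, 75)))   # (0.25, 0.75)
```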
Optionally, in another possible implementation manner, the identification module is specifically configured to: determine the square points and the mark point from the pixel center points of the target graphs according to the pixel areas of the target graphs and the positional relationship between the pixel center points of the target graphs.
Optionally, in another possible implementation manner, the identification module is further specifically configured to:
determine a first graph and second graphs from the target graphs according to the pixel areas; the pixel area of the first graph is larger than that of each second graph;
determine third graphs and a fourth graph from the second graphs according to the positional relationship between the pixel center point of each second graph and the pixel center point of the first graph;
and determine the pixel center points of the first graph and the third graphs as the square points, and the pixel center point of the fourth graph as the mark point.
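The two-step selection of the first, second, third and fourth graphs can be sketched as follows; the application specifies only that the first graph has the largest pixel area and that the second graphs are split by their positions relative to the first graph, so the farthest-from-the-first rule used below for the fourth graph is an assumption made purely for illustration.

```python
# Illustrative classification of recognized target graphs into square points and the mark point.
# Each graph is a (pixel_area, (cx, cy)) pair; the selection rules are assumptions, see above.

def classify_graphs(graphs):
    # Step 1: the graph with the largest pixel area is the first graph; the rest are second graphs.
    graphs = sorted(graphs, key=lambda g: g[0], reverse=True)
    first, seconds = graphs[0], graphs[1:]

    # Step 2: split the second graphs by their position relative to the first graph's center.
    # Assumed rule: the second graph farthest from the first graph is the fourth graph; the
    # remaining second graphs are third graphs.
    def squared_distance_to_first(graph):
        (fx, fy), (cx, cy) = first[1], graph[1]
        return (cx - fx) ** 2 + (cy - fy) ** 2

    seconds = sorted(seconds, key=squared_distance_to_first)
    thirds, fourth = seconds[:-1], seconds[-1]

    square_points = [first[1]] + [g[1] for g in thirds]   # pixel centers of first and third graphs
    mark_point = fourth[1]                                # pixel center of the fourth graph
    return square_points, mark_point


# Example: one large graph and three small ones; the farthest small one becomes the mark point.
print(classify_graphs([(400, (0, 0)), (100, (100, 0)), (100, (0, 100)), (100, (90, 90))]))
# ([(0, 0), (100, 0), (0, 100)], (90, 90))
```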
Optionally, the device for determining location information may further include a storage module, and the storage module is configured to store the program code of the device for determining location information, and the like.
As shown in fig. 8, an embodiment of the present application further provides a device for determining position information, which includes a memory 41, processors 42 (42-1 and 42-2), a bus 43, and a communication interface 44; the memory 41 is configured to store computer-executable instructions, and the processor 42 is connected with the memory 41 through the bus 43; when the device for determining position information runs, the processor 42 executes the computer-executable instructions stored in the memory 41 to cause the device to perform the method for determining position information provided in the above embodiments.
In a specific implementation, as one embodiment, the processor 42 may include one or more central processing units (CPUs), such as CPU0 and CPU1 shown in fig. 8. As another example, the device for determining position information may include a plurality of processors 42, such as the processor 42-1 and the processor 42-2 shown in fig. 8. Each of these processors 42 may be a single-core processor or a multi-core processor. The processor 42 here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 41 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 41 may exist independently and be connected to the processor 42 through the bus 43, or may be integrated with the processor 42.
In a specific implementation, the memory 41 is configured to store the data of the present application and the computer-executable instructions corresponding to the software programs for executing the present application. The processor 42 may run or execute the software programs stored in the memory 41 and invoke the data stored in the memory 41 to implement the various functions of the device for determining position information.
The communication interface 44 is any device, such as a transceiver, for communicating with other devices or communication networks, such as a control system, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), and the like. The communication interface 44 may include a receiving unit implementing a receiving function and a transmitting unit implementing a transmitting function.
The bus 43 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 43 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
As an example, in connection with fig. 7, the acquisition module in the device for determining position information performs the same function as the receiving unit in fig. 8, the location information determining module performs the same function as the processor in fig. 8, and the storage module performs the same function as the memory in fig. 8.
For the explanation of the related contents in this embodiment, reference may be made to the above method embodiments, which are not described herein again.
Through the above description of the embodiments, it will be clear to those skilled in the art that, for convenience and brevity of description, the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, device and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by a computer, the computer is enabled to execute the method for determining location information provided in the foregoing embodiment.
The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a register, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, any suitable combination of the foregoing, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In the embodiments of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for determining location information, comprising:
determining a mark partition to which a mark point belongs based on the position coordinates of the mark point in the positioning label; the mark partition at least comprises a positioning partition and a scene partition;
in each mark partition, determining a position identifier represented by the positioning label according to the position coordinates of the mark point; the position identifier at least comprises a positioning identifier and a scene identifier;
determining location information of the mobile device based on the position identifier.
2. The method for determining location information according to claim 1, wherein the determining a mark partition to which the mark point belongs based on the position coordinates of the mark point in the positioning label comprises:
in the case that the position coordinates of the mark point fall within a first preset coordinate range, determining the mark partition to which the mark point belongs as the positioning partition;
in the case that the position coordinates of the mark point fall within a second preset coordinate range, determining the mark partition to which the mark point belongs as the scene partition; wherein the first preset coordinate range and the second preset coordinate range do not overlap.
3. The method for determining location information according to claim 1, wherein the determining, in each mark partition, a position identifier represented by the positioning label according to the position coordinates of the mark point comprises:
in the positioning partition, determining at least one positioning number according to the position coordinates of the mark point, and determining the positioning identifier represented by the positioning label from a positioning identifier library according to the at least one positioning number; the positioning identifier library comprises the correspondence between positioning numbers and positioning identifiers;
in the scene partition, determining at least one scene number according to the position coordinates of the mark point, and determining the scene identifier represented by the positioning label from a scene identifier library according to the at least one scene number; the scene identifier library comprises the correspondence between scene numbers and scene identifiers.
4. The method for determining location information according to claim 1, wherein, in the case that the scene identifier is a floor identifier, the determining location information of the mobile device based on the position identifier comprises:
determining current location information of the mobile device based on the location identifier and the floor identifier.
5. The method for determining location information according to claim 1, wherein, in the case that the scene identifier is a speed limit identifier, the determining location information of the mobile device based on the position identifier comprises:
determining the current location information of the mobile device based on the positioning identifier, and determining the travel speed of the mobile device based on the speed limit identifier.
6. The method for determining location information according to claim 1, wherein, in the case that the scene identifier is a one-way pass identifier, the determining location information of the mobile device based on the position identifier comprises:
determining the current location information of the mobile device based on the positioning identifier, and determining whether the mobile device is allowed to pass through the current road section based on the one-way passing direction of the one-way pass identifier and the travel direction of the mobile device.
7. The method for determining location information according to claim 1, wherein, before the determining the mark partition to which the mark point belongs based on the position coordinates of the mark point in the positioning label, the method further comprises:
acquiring a target image comprising the positioning label;
performing feature recognition on the target image, and determining square points and the mark point according to the identified target graphs;
establishing a coordinate system according to the positional relationship among the square points;
and determining the position coordinates of the mark point according to the relative position of the mark point in the coordinate system.
8. The method for determining location information according to claim 7, wherein the determining square points and the mark point according to the identified target graphs comprises:
determining the square points and the mark point from the pixel center points of the target graphs according to the pixel areas of the target graphs and the positional relationship between the pixel center points of the target graphs.
9. The method for determining location information according to claim 8, wherein the determining the square points and the mark point from the pixel center points of the target graphs according to the pixel areas of the target graphs and the positional relationship between the pixel center points of the target graphs comprises:
determining a first graph and second graphs from the target graphs according to the pixel areas; the pixel area of the first graph is larger than that of each second graph;
determining third graphs and a fourth graph from the second graphs according to the positional relationship between the pixel center point of each second graph and the pixel center point of the first graph;
and determining the pixel center points of the first graph and the third graphs as the square points, and determining the pixel center point of the fourth graph as the mark point.
10. A device for determining location information, comprising:
a mark partition determining module, configured to determine, based on the position coordinates of a mark point in a positioning label, a mark partition to which the mark point belongs; the mark partition at least comprises a positioning partition and a scene partition;
a position identifier determining module, configured to determine, in each mark partition determined by the mark partition determining module, a position identifier represented by the positioning label according to the position coordinates of the mark point; the position identifier at least comprises a positioning identifier and a scene identifier;
and a location information determining module, configured to determine location information of the mobile device based on the position identifier determined by the position identifier determining module.
11. A device for determining location information, comprising a memory, a processor, a bus and a communication interface; wherein the memory is configured to store computer-executable instructions, and the processor is connected with the memory through the bus;
when the device for determining location information runs, the processor executes the computer-executable instructions stored in the memory to cause the device to perform the method for determining location information according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored therein instructions, which when executed by a computer, cause the computer to execute a method of determining location information according to any one of claims 1 to 9.
CN202111454177.5A 2021-12-01 2021-12-01 Method, device and storage medium for determining location information Active CN114140520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111454177.5A CN114140520B (en) 2021-12-01 2021-12-01 Method, device and storage medium for determining location information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111454177.5A CN114140520B (en) 2021-12-01 2021-12-01 Method, device and storage medium for determining location information

Publications (2)

Publication Number Publication Date
CN114140520A true CN114140520A (en) 2022-03-04
CN114140520B CN114140520B (en) 2025-02-11

Family

ID=80386626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111454177.5A Active CN114140520B (en) 2021-12-01 2021-12-01 Method, device and storage medium for determining location information

Country Status (1)

Country Link
CN (1) CN114140520B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190128677A1 (en) * 2017-05-11 2019-05-02 Manuj Naman Autonomously Moving Machine and Method for Operating an Autonomously Moving Machine
CN111537954A (en) * 2020-04-20 2020-08-14 孙剑 Real-time high-dynamic fusion positioning method and device
CN111256701A (en) * 2020-04-26 2020-06-09 北京外号信息技术有限公司 Equipment positioning method and system
WO2021218546A1 (en) * 2020-04-26 2021-11-04 北京外号信息技术有限公司 Device positioning method and system
CN112013850A (en) * 2020-10-16 2020-12-01 北京猎户星空科技有限公司 Positioning method, positioning device, self-moving equipment and storage medium
CN113393515A (en) * 2021-05-21 2021-09-14 杭州易现先进科技有限公司 Visual positioning method and system combined with scene labeling information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAGO VICENTE: "Large-Scale Weakly-Supervised Shadow Detection", 《STATE UNIVERSITY OF NEW YORK AT STONY BROOK》, 31 December 2018 (2018-12-31) *
LI, Ya: "Research on Indoor Positioning Method Based on RFID", China Master's Theses Full-text Database (Information Science and Technology), no. 1, 15 January 2019 (2019-01-15)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100649A (en) * 2022-05-06 2022-09-23 广东虚拟现实科技有限公司 Method, device, electronic device and storage medium for generating marking pattern

Also Published As

Publication number Publication date
CN114140520B (en) 2025-02-11

Similar Documents

Publication Publication Date Title
US10540796B2 (en) Ground plane detection for placement of augmented reality objects
EP3506212A1 (en) Method and apparatus for generating raster map
CN109658725B (en) Parking lot vehicle searching method, device and system, computer equipment and storage medium
CN108876857A (en) Localization method, system, equipment and the storage medium of automatic driving vehicle
US20220113156A1 (en) Method, apparatus and system for generating real scene map
KR20150034997A (en) Method and system for notifying destination by route guide
CN108326845A (en) Robot localization method, apparatus and system based on binocular camera and laser radar
CN110838178A (en) Method and device for determining road scene model
CN114140520B (en) Method, device and storage medium for determining location information
US11176824B2 (en) Identification and performance of an action related to a poorly parked vehicle
JP2015200504A (en) User terminal positional information specifying device
CN112911605A (en) Base station planning method and device
US20180018799A1 (en) Method and apparatus for non-occluding overlay of user interface or information elements on a contextual mao
CN112700464B (en) Map information processing method and device, electronic equipment and storage medium
TWI585365B (en) Indoor navigation system and method based on relevancy of road signs
Nene et al. A study of vehicular parking systems
CN112748739A (en) Control method and device of mobile equipment, computer readable storage medium and system
CN113959432B (en) Method, device and storage medium for determining following path of mobile equipment
CN113103224A (en) Avoidance method and device for mobile equipment and computer readable storage medium
WO2024253311A1 (en) Indoor navigation method using place names and application
CN113822995B (en) Method, device and storage medium for creating navigation map of mobile device
CN115879186B (en) Method, device, equipment and storage medium for determining placement position of part number
CN105677843A (en) Method for automatically obtaining attribute of four boundaries of parcel
CN109246606B (en) Expansion method and device of robot positioning network, terminal equipment and storage medium
CN111582352A (en) Object-based sensing method and device, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant