CN113269976B - Positioning method and device - Google Patents
- Publication number
- CN113269976B (application CN202110342456.6A / CN202110342456A)
- Authority
- CN
- China
- Prior art keywords
- server
- lane
- target
- camera
- terminal equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Navigation (AREA)
- Traffic Control Systems (AREA)
Abstract
An embodiment of the application provides a positioning method and apparatus, relating to the field of terminal technologies and applied to a positioning system comprising a terminal device and a first server. The method includes: the terminal device sends the identifier, starting position, and destination of a target object to be navigated to the first server; the first server sends a first navigation route to the terminal device according to the starting position and the destination; while the terminal device travels along the first navigation route, the terminal device reports its position information to the first server; when the position information indicates that the terminal device is about to enter an intersection, the first server acquires the target lane in which the terminal device is located; the first server sends the terminal device indication information indicating the target lane; and the terminal device prompts the user with the target lane according to the indication information. In this way, the lane the user is in can be determined accurately based on the correspondence between cameras and lanes.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a positioning method and apparatus.
Background
With the development of urban traffic, overpasses are becoming more numerous and more complex; an overpass may, for example, comprise multiple layers of road surface. When a user drives onto an overpass under navigation, the overpass has several road surfaces stacked at the same position, so the navigation cannot distinguish which layer of the overpass the user is on and therefore cannot provide an accurate route for the correct road surface layer. In particular, after the user takes a wrong turn relative to the navigation route, the navigation software cannot promptly identify which layer of the overpass the user is currently on and thus cannot update the route in time, which causes great inconvenience while driving on the overpass.
In one possible existing design, Radio Frequency Identification (RFID) is used to locate the road surface layer of an overpass. For example, RFID tags may be laid in the road surface, and the road surface layer where a vehicle is located is identified from the signals the vehicle receives from those tags as it passes over them.
However, the identification range of RFID is limited: a vehicle far from the tag within the road may fail to receive the tag's signal, so the road surface layer where the vehicle is located on the overpass cannot be positioned accurately. In addition, laying RFID tags in the overpass road surface means the tags are easily crushed by vehicles, and installation requires breaking up the existing road surface, which is a large engineering effort.
Disclosure of Invention
An embodiment of the application provides a positioning method and apparatus that can accurately determine which of multiple lanes a vehicle is in, so that a terminal device receiving the lane information can navigate based on the correct lane.
In a first aspect, an embodiment of the application provides a positioning method applied to a positioning system comprising a terminal device and a first server. The method includes: the terminal device sends the identifier, starting position, and destination of a target object to be navigated to the first server; the first server sends a first navigation route to the terminal device according to the starting position and the destination; while the terminal device travels along the first navigation route, the terminal device reports its position information to the first server; when the position information indicates that the terminal device is about to enter an intersection, the first server acquires the target lane in which the terminal device is located, where a plurality of cameras are arranged at the intersection and photograph objects in the different lanes of the intersection, and the target lane is determined by the first server either from the content captured by the cameras or from information received from a second server; the first server sends the terminal device indication information indicating the target lane; and the terminal device prompts the user with the target lane according to the indication information. In this way, the lane the user is in can be determined accurately based on the correspondence between cameras and lanes, and the terminal device receiving the lane information can navigate based on the correct lane.
Here, a lane can be understood as a road layer or road surface layer; the first server may be a navigation server; the second server may be a traffic platform server; the intersection can be understood as an intersection with multiple layers of road surface or with multiple roads; the first navigation route can be understood as a navigation route obtained from the GPS position of the terminal device; and the terminal device may be a mobile phone or a vehicle.
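For illustration only, the following Python sketch models the first-aspect message flow between the terminal device and the first server. All class, method, and field names here (FirstServer, NavRequest, and so on) are assumptions made for the sketch, not identifiers from the patent.

```python
# Minimal sketch of the first-aspect flow; names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Position = Tuple[float, float]  # (latitude, longitude)

@dataclass
class NavRequest:
    target_id: str        # identifier of the target object, e.g. a license plate
    start: Position       # starting position
    destination: Position

class FirstServer:
    def plan_route(self, req: NavRequest) -> List[Position]:
        """Return the first navigation route from the start to the destination."""
        return [req.start, req.destination]  # placeholder route

    def on_position_report(self, target_id: str, pos: Position) -> None:
        """Handle a position report sent while driving the first route."""
        if self.near_intersection(pos):
            # The target lane comes from the intersection cameras, either
            # resolved locally or queried from the second server.
            lane = self.get_target_lane(target_id, pos)
            self.indicate_lane(target_id, lane)

    def near_intersection(self, pos: Position) -> bool:
        return True  # placeholder geofence test

    def get_target_lane(self, target_id: str, pos: Position) -> str:
        return "layer-2"  # see the camera/lane lookup sketches below

    def indicate_lane(self, target_id: str, lane: str) -> None:
        print(f"indication to {target_id}: target lane is {lane}")

server = FirstServer()
server.plan_route(NavRequest("A12345", (39.90, 116.40), (39.95, 116.45)))
server.on_position_report("A12345", (39.91, 116.41))  # prints a lane indication
```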
In one possible implementation, the first server acquiring the target lane in which the terminal device is located includes: the first server obtains a plurality of first association relations from the plurality of cameras photographing objects in the lanes of the intersection, where each first association relation comprises an image and the identifier of the camera that captured it; when the first server recognizes the identifier of the target object in one of the images, it determines the target camera corresponding to that target image; and the first server determines the target lane where the target camera is located according to a second association relation, which comprises the correspondence between cameras and lanes. In this way, the first server can accurately determine which lane the user is in based on the correspondence between cameras and lanes, and the terminal device can navigate based on the accurate lane sent by the first server.
Here, the object in a lane may be the license plate of a vehicle in the lane; the image may be a photograph containing the license plate information; the identifier of a camera may be a camera number; the identifier of the target object may be a license plate number; and the target lane is the lane obtained from the correspondence between cameras and lanes.
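Concretely, the lookup amounts to two table lookups, as in the Python sketch below. The images are abstracted here to the license plate recognized in them, and all camera ids and lane names are made-up sample data.

```python
# Sketch of the camera-based lane lookup via the two association relations.
from typing import Dict, List, Optional, Tuple

# First association relations: one (recognized plate, camera id) pair per image
# captured by the cameras at the intersection.
first_relations: List[Tuple[str, str]] = [
    ("A12345", "cam-01"),
    ("B67890", "cam-02"),
]

# Second association relation: camera id -> lane (road surface layer).
second_relation: Dict[str, str] = {
    "cam-01": "layer-2",
    "cam-02": "layer-3",
}

def find_target_lane(target_plate: str) -> Optional[str]:
    """Lane of the camera whose image contains the target object's identifier."""
    for plate, camera_id in first_relations:
        if plate == target_plate:              # the target image is found
            return second_relation.get(camera_id)
    return None                                # target not seen by any camera

print(find_target_lane("A12345"))  # -> layer-2
```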
In one possible implementation, the first server acquiring the target lane includes: the first server sends a query request to the second server, the query request including the identifier of the target object and either the position information of the target object or the identifier of the intersection; and the first server receives the identifier of the target lane from the second server. In this way, when the second server stores the correspondence between cameras and lanes, the second server can accurately determine which lane the user is in and send that lane to the first server, and the terminal device can then navigate based on the accurate lane sent by the first server.
In one possible implementation, the first server acquiring the target lane includes: the first server sends a query request to the second server, the query request including the identifier of the target object and either the position information of the target object or the identifier of the intersection; the first server receives from the second server the identifier of the target camera, i.e. the camera that photographed the target object; and the first server determines the target lane where the target camera is located according to the second association relation, which comprises the correspondence between cameras and lanes. In this way, when the first server stores the correspondence between cameras and lanes, the first server can accurately determine which lane the user is in, and the terminal device can navigate based on the accurate lane sent by the first server.
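Both query variants can be sketched with a single hypothetical call to the second server; query_second_server and its payload fields below are assumptions chosen to mirror the text, not a real API.

```python
# Sketch of the two second-server query variants; the RPC is hypothetical.
from typing import Dict, Optional

CAMERA_TO_LANE: Dict[str, str] = {"cam-01": "layer-2"}  # second association

def query_second_server(request: dict) -> dict:
    # Placeholder for the real traffic-platform call; here it answers both
    # variants at once so either branch below can be exercised.
    return {"lane_id": "layer-2", "camera_id": "cam-01"}

def get_lane_via_second_server(target_id: str,
                               position: Optional[tuple] = None,
                               intersection_id: Optional[str] = None) -> Optional[str]:
    request = {"target_id": target_id}
    # The query carries either the target's position or the intersection id.
    if position is not None:
        request["position"] = position
    else:
        request["intersection_id"] = intersection_id

    reply = query_second_server(request)
    if "lane_id" in reply:                 # variant 1: lane id returned directly
        return reply["lane_id"]
    # Variant 2: a camera id is returned and the first server maps it to a lane
    # using its own copy of the second association relation.
    return CAMERA_TO_LANE.get(reply["camera_id"])

print(get_lane_via_second_server("A12345", intersection_id="X-101"))  # -> layer-2
```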
In one possible implementation, the method further includes: when the target lane differs from the lane indicated in the first navigation route, the first server sends a second navigation route to the terminal device according to the target lane and the destination. In this way, the terminal device can provide the user with a more accurate navigation route according to the lane determined in the current scenario.
Here, the second navigation route is the navigation route corresponding to the target lane obtained from the correspondence between cameras and lanes.
In one possible implementation, after the first server sends the second navigation route to the terminal device according to the target lane and the destination, the method further includes: when the first server receives position information from the terminal device within a first time period, the first server continues to navigate the terminal device according to the second navigation route during that period; when the first server receives position information from the terminal device after the first time period, the first server navigates the terminal device according to the position information received after the first time period. In this way, the more reliable lane information is used within each time window, and the navigation software can provide the user with a more accurate navigation route.
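A minimal sketch of this time-window rule, assuming a hypothetical period length and a monotonic clock:

```python
# Sketch of the first-time-period rule; the period length is an assumption.
import time

FIRST_PERIOD_S = 60.0                      # assumed length of the first period
second_route_issued_at = time.monotonic()  # when the second route was sent

def basis_for_report(report_time: float, reported_position) -> str:
    """Choose what to navigate from when a position report arrives."""
    if report_time - second_route_issued_at <= FIRST_PERIOD_S:
        # Within the first time period: keep navigating on the second route,
        # i.e. keep trusting the camera-determined target lane.
        return "second navigation route"
    # After the first time period: navigate from the reported position again.
    return f"route replanned from position {reported_position}"

print(basis_for_report(time.monotonic(), (39.91, 116.41)))
```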
In one possible implementation, the method further includes: the first server sets a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to environment information; when the environment information indicates an environment unfavorable to image recognition, the first weight is greater than the second weight; when the environment information indicates an environment that does not affect image recognition, the first weight is smaller than the second weight; and when the target lane differs from the lane indicated in the first navigation route, the first server sends the terminal device a second navigation route computed from the destination and whichever of the two lanes carries the larger weight. In this way, by weighting the lanes determined by different means, the more reliable lane information is selected under each condition, and the terminal device can provide the user with an accurate navigation route.
Here, an environment that affects image recognition may be bad weather such as a thunderstorm or haze, or more generally an environment with low visibility.
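A sketch of the weighted choice; the numeric weights and the visibility test are assumptions, while the comparison rule (which weight is larger when) follows the text above:

```python
# Sketch of the environment-weighted lane selection; weights are assumptions.
def choose_lane(route_lane: str, target_lane: str, env: dict) -> str:
    """Pick the lane used to plan the second route when the two disagree."""
    bad_for_vision = (env.get("weather") in ("thunderstorm", "haze")
                      or env.get("visibility_m", 10_000) < 200)

    if bad_for_vision:
        first_weight, second_weight = 0.7, 0.3   # distrust the cameras
    else:
        first_weight, second_weight = 0.3, 0.7   # trust the cameras

    if route_lane != target_lane:
        # The second route is planned from the larger-weight lane plus the
        # destination.
        return route_lane if first_weight > second_weight else target_lane
    return route_lane

print(choose_lane("layer-3", "layer-2", {"weather": "haze"}))  # -> layer-3
```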
In one possible implementation, the method further includes: when the target lane differs from the lane indicated in the first navigation route and the distance between the two lanes is greater than a distance threshold, the first server continues to navigate the terminal device according to the first navigation route. In this way, the distance between the lanes obtained in different ways serves as a plausibility check, and the navigation software can still provide the user with an accurate navigation route.
When that distance exceeds the threshold, the target lane determined from the correspondence between cameras and lanes may not be accurate enough, and the first navigation route is used to navigate the terminal device instead.
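A sketch of this plausibility check, with an assumed threshold value and a placeholder for the lane-to-lane distance:

```python
# Sketch of the distance-threshold fallback; the threshold is an assumption.
DISTANCE_THRESHOLD_M = 30.0

def lane_distance_m(lane_a: str, lane_b: str) -> float:
    return 50.0  # placeholder: real code would compare the lanes' positions

def route_to_use(route_lane: str, target_lane: str) -> str:
    if (target_lane != route_lane
            and lane_distance_m(target_lane, route_lane) > DISTANCE_THRESHOLD_M):
        # The lanes are implausibly far apart, so the camera-determined lane
        # may be wrong; keep navigating on the first navigation route.
        return "first navigation route"
    return "second navigation route"

print(route_to_use("layer-3", "layer-2"))  # -> first navigation route
```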
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
In a second aspect, an embodiment of the application provides a positioning method, including: a first server receives, from a terminal device, the identifier, starting position, and destination of a target object to be navigated; the first server sends a first navigation route to the terminal device according to the starting position and the destination; the first server receives position information of the terminal device while it travels along the first navigation route; when the position information indicates that the terminal device is about to enter an intersection, the first server acquires the target lane in which the terminal device is located, where a plurality of cameras are arranged at the intersection and photograph objects in the different lanes of the intersection, and the target lane is determined by the first server either from the content captured by the cameras or from information received from a second server; and the first server sends the terminal device indication information indicating the target lane. In this way, the lane the user is in can be determined accurately based on the correspondence between cameras and lanes, and the terminal device receiving the lane information can navigate based on the correct lane.
In one possible implementation, the first server acquiring the target lane includes: the first server obtains a plurality of first association relations from the plurality of cameras photographing objects in the lanes of the intersection, where each first association relation comprises an image and the identifier of the camera that captured it; when the first server recognizes the identifier of the target object in one of the images, it determines the target camera corresponding to that target image; and the first server determines the target lane where the target camera is located according to a second association relation, which comprises the correspondence between cameras and lanes.
In one possible implementation, the first server acquiring the target lane includes: the first server sends a query request to the second server, the query request including the identifier of the target object and either the position information of the target object or the identifier of the intersection; and the first server receives the identifier of the target lane from the second server.
In one possible implementation, the first server acquiring the target lane includes: the first server sends a query request to the second server, the query request including the identifier of the target object and either the position information of the target object or the identifier of the intersection; the first server receives from the second server the identifier of the target camera, i.e. the camera that photographed the target object; and the first server determines the target lane where the target camera is located according to the second association relation, which comprises the correspondence between cameras and lanes.
In one possible implementation, the method further includes: when the target lane differs from the lane indicated in the first navigation route, the first server sends the terminal device a second navigation route according to the target lane and the destination, so the terminal device can provide the user with a more accurate route.
In one possible implementation, after the first server sends the second navigation route to the terminal device, the method further includes: when the first server receives position information from the terminal device within a first time period, the first server continues to navigate the terminal device according to the second navigation route during that period; when the first server receives position information after the first time period, it navigates the terminal device according to that later position information.
In one possible implementation, the method further includes: the first server sets a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to environment information; when the environment information indicates an environment unfavorable to image recognition, the first weight is greater than the second weight; when the environment information indicates an environment that does not affect image recognition, the first weight is smaller than the second weight; and when the target lane differs from the lane indicated in the first navigation route, the first server sends the terminal device a second navigation route computed from the destination and whichever of the two lanes carries the larger weight.
In one possible implementation, the method further includes: when the target lane differs from the lane indicated in the first navigation route and the distance between the two lanes is greater than a distance threshold, the first server continues to navigate the terminal device according to the first navigation route.
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
In a third aspect, an embodiment of the application provides a positioning method, including: a terminal device sends the identifier, starting position, and destination of a target object to be navigated to a first server; the terminal device receives a first navigation route from the first server, the first navigation route being related to the starting position and the destination; while traveling along the first navigation route, the terminal device reports its position information to the first server; when the position information indicates that the terminal device is about to enter an intersection, the terminal device sends prompt information to the first server, the prompt information indicating that the terminal device is about to enter the intersection; the terminal device receives, from the first server, indication information indicating a target lane; and the terminal device prompts the user with the target lane according to the indication information. In this way, the lane the user is in can be determined accurately based on the correspondence between cameras and lanes, and the terminal device receiving the lane information can navigate based on the correct lane.
In a fourth aspect, an embodiment of the application provides a positioning apparatus applied to a positioning system comprising a terminal device and a first server. The apparatus includes: a communication unit configured to send the identifier, starting position, and destination of a target object to be navigated to the first server; the communication unit is further configured to send a first navigation route to the terminal device according to the starting position and the destination, and to report the position information of the terminal device to the first server while the terminal device travels along the first navigation route; when the position information indicates that the terminal device is about to enter an intersection, a processing unit is configured to acquire the target lane in which the terminal device is located, where a plurality of cameras are arranged at the intersection and photograph objects in the different lanes of the intersection, and the target lane is determined by the first server either from the content captured by the cameras or from information received from a second server; the communication unit is further configured to send the terminal device indication information indicating the target lane; and the processing unit is further configured to prompt the user with the target lane according to the indication information.
In a possible implementation, the processing unit is specifically configured to obtain a plurality of first association relations from the plurality of cameras photographing objects in the lanes of the intersection, each first association relation comprising an image and the identifier of the camera that captured it; the processing unit is further specifically configured to determine, when the identifier of the target object is recognized in one of the images, the target camera corresponding to that target image; and the processing unit is further specifically configured to determine, according to the second association relation, the target lane where the target camera is located, the second association relation comprising the correspondence between cameras and lanes.
In a possible implementation manner, the communication unit is specifically configured to send, to the second server, an inquiry request, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit is further specifically configured to receive an identification of the target lane from the second server.
In a possible implementation, the communication unit is specifically configured to send a query request to the second server, the query request including the identifier of the target object and either the position information of the target object or the identifier of the intersection; the communication unit is further specifically configured to receive from the second server the identifier of the target camera, i.e. the camera that photographed the target object; and the processing unit is specifically configured to determine, according to the second association relation, the target lane where the target camera is located, the second association relation comprising the correspondence between cameras and lanes.
In one possible implementation, the communication unit is further configured to transmit a second navigation route to the terminal device according to the target lane and the destination when the target lane is different from a lane indicated in the first navigation route.
In a possible implementation manner, when the first server receives the location information from the terminal device within the first time period, the processing unit is specifically configured to continuously navigate the terminal device according to the second navigation route within the first time period; when the first server receives the location information from the terminal device after the first time period, the processing unit is further specifically configured to navigate the terminal device according to the location information of the terminal device received after the first time period.
In a possible implementation, the processing unit is further configured to set a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to environment information; when the environment information indicates an environment unfavorable to image recognition, the first weight is greater than the second weight; when the environment information indicates an environment that does not affect image recognition, the first weight is smaller than the second weight; and when the target lane differs from the lane indicated in the first navigation route, the communication unit is further configured to send the terminal device a second navigation route computed from the destination and whichever of the two lanes carries the larger weight.
In a possible implementation, when the target lane is different from the lane indicated in the first navigation route and the distance between the target lane and the lane indicated in the first navigation route is greater than the distance threshold, the processing unit is further configured to continue to navigate the terminal device according to the first navigation route.
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
In a fifth aspect, an embodiment of the application provides a positioning apparatus, including: a communication unit configured to receive, from a terminal device, the identifier, starting position, and destination of a target object to be navigated; the communication unit is further configured to send a first navigation route to the terminal device according to the starting position and the destination, and to receive the position information of the terminal device while it travels along the first navigation route; when the position information indicates that the terminal device is about to enter an intersection, a processing unit is configured to acquire the target lane in which the terminal device is located, where a plurality of cameras are arranged at the intersection and photograph objects in the different lanes of the intersection, and the target lane is determined by the first server either from the content captured by the cameras or from information received from a second server; and the communication unit is further configured to send the terminal device indication information indicating the target lane.
In a possible implementation manner, the processing unit is specifically configured to obtain a plurality of first association relationships based on a plurality of cameras capturing objects in a plurality of lanes at an intersection, where any of the first association relationships includes an image and an identifier of a camera capturing the image; when the first server identifies the identifier of the target object in the plurality of images, the processing unit is further specifically configured to determine a target camera corresponding to the target image including the identifier of the target object; the processing unit is further specifically used for determining a target lane where the target camera is located according to the second association relation; the second association relationship includes a correspondence relationship between the camera and the lane.
In a possible implementation manner, the communication unit is specifically configured to send an inquiry request to the second server, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; and a communication unit, specifically configured to receive an identification of the target lane from the second server.
In a possible implementation manner, the communication unit is specifically configured to send an inquiry request to the second server, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit is further specifically used for receiving an identifier of a target camera from the second server, wherein the target camera is a camera for shooting a target object; the processing unit is specifically used for determining a target lane where the target camera is located according to the second incidence relation; the second association relationship includes a correspondence relationship between the camera and the lane.
In one possible implementation, the communication unit is further configured to transmit a second navigation route to the terminal device according to the target lane and the destination when the target lane is different from a lane indicated in the first navigation route.
In a possible implementation manner, when the first server receives the location information from the terminal device within the first time period, the processing unit is specifically configured to continuously navigate the terminal device according to the second navigation route within the first time period; when the first server receives the location information from the terminal device after the first time period, the processing unit is further specifically configured to navigate the terminal device according to the location information of the terminal device received after the first time period.
In one possible implementation, the processing unit is further configured to set a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to environment information; when the environment information indicates an environment unfavorable to image recognition, the first weight is greater than the second weight; when the environment information indicates an environment that does not affect image recognition, the first weight is smaller than the second weight; and when the target lane differs from the lane indicated in the first navigation route, the communication unit is further configured to send the terminal device a second navigation route computed from the destination and whichever of the two lanes carries the larger weight.
In a possible implementation, the processing unit is further configured to continue to navigate the terminal device according to the first navigation route when the target lane is different from the lane indicated in the first navigation route and the distance between the target lane and the lane indicated in the first navigation route is greater than the distance threshold.
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
In a sixth aspect, an embodiment of the present application provides a positioning apparatus, including: the communication unit is used for sending the identification, the starting position and the destination of the target object needing navigation to the first server; the communication unit is also used for receiving a first navigation route from the first server; the first navigation route is related to a starting position and a destination; the communication unit is also used for reporting the position information of the terminal equipment to the first server in the process that the terminal equipment drives according to the first navigation route; when the position information reflects that the terminal equipment is about to drive into the intersection, the communication unit is also used for sending prompt information to the first server; the prompt information is used for prompting that the terminal equipment is about to drive into the intersection; the communication unit is also used for receiving indication information used for indicating the target lane from the first server; and the processing unit is used for prompting the user to be in the target lane according to the indication information.
In a seventh aspect, an embodiment of the present application provides a positioning apparatus, including a processor and a memory, where the memory is used to store code instructions; the processor is configured to execute the code instructions to cause the electronic device to perform the positioning method as described in the first aspect or any implementation of the first aspect, the positioning method as described in the second aspect or any implementation of the second aspect, or the positioning method as described in the third aspect or any implementation of the third aspect.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium storing instructions that, when executed, cause a computer to perform a positioning method as described in the first aspect or any implementation manner of the first aspect, a positioning method as described in the second aspect or any implementation manner of the second aspect, or a positioning method as described in the third aspect or any implementation manner of the third aspect.
A ninth aspect is a computer program product comprising a computer program which, when executed, causes a computer to perform the positioning method as described in the first aspect or any implementation of the first aspect, the positioning method as described in the second aspect or any implementation of the second aspect, or the positioning method as described in the third aspect or any implementation of the third aspect.
It should be understood that the fourth aspect to the ninth aspect of the present application correspond to the technical solutions of the first aspect to the third aspect of the present application, and the beneficial effects obtained by the aspects and the corresponding possible embodiments are similar, and are not described again.
Drawings
Fig. 1 is a schematic view of a scenario provided in an embodiment of the present application;
fig. 2 is a schematic frame diagram of a terminal device 200 according to an embodiment of the present application;
fig. 3 is a schematic diagram of a navigation system 300 according to an embodiment of the present application;
fig. 4 is a schematic view of a scene based on positioning by a navigation server according to an embodiment of the present application;
fig. 5 is a schematic view of a scenario based on positioning of a navigation server and a traffic platform server according to an embodiment of the present application;
fig. 6 is a schematic flowchart of positioning based on a navigation server according to an embodiment of the present application;
fig. 7 is a schematic view of an interface for inputting license plate information according to an embodiment of the present disclosure;
fig. 8 is a schematic interface diagram illustrating a road surface layer according to an embodiment of the present disclosure;
fig. 9 is a schematic interface diagram illustrating another road surface layer according to an embodiment of the present disclosure;
fig. 10 is a schematic view of an interface reported by a user according to an embodiment of the present application;
fig. 11 is a schematic flowchart of positioning based on a navigation server and a traffic platform server according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a positioning apparatus according to an embodiment of the present application;
fig. 13 is a schematic hardware structure diagram of a control device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
To facilitate a clear description of the technical solutions of the embodiments of the application, words such as "first" and "second" are used to distinguish identical or similar items with substantially the same functions and effects. For example, a first value and a second value are distinguished only to tell the values apart, without limiting their order. Those skilled in the art will appreciate that words such as "first" and "second" do not limit quantity or execution order, and do not denote any order of importance.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to mean exemplary, illustrative, or descriptive. Any embodiment or design described herein as "exemplary" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
With the development of urban traffic, overpasses and tunnels are becoming more numerous and more complex; an overpass may, for example, comprise multiple layers of road surface. Overpasses bring convenience to traffic but pose challenges for road navigation. Generally, when a user drives onto an overpass under navigation, the overpass has several road surfaces stacked at the same position, so the navigation cannot distinguish which layer the user is currently on. It can then happen that the user has driven onto the wrong road layer while the navigation still indicates a route as if the user were driving on the right one.
It can be understood that the overpass of the embodiments of the application may also be replaced by a road comprising a main road and a secondary road; the two may be on the same layer or on different layers, and in the general case navigation cannot distinguish the main road from the secondary road, so accurate navigation cannot be provided for the user. Correspondingly, a road surface layer may also be referred to as a road layer. A road surface layer may represent the surfaces of the different layers of a multilayer road; alternatively, it may represent different roads within the same layer, for example a main road and a secondary road among adjacent roads, which may be represented by different road surface layers.
For convenience of description, overpasses and road surface layers are described in the following, and the description does not constitute a specific limitation on the scene.
Exemplarily, fig. 1 is a schematic view of a scenario provided in an embodiment of the application. As shown in fig. 1, the scene includes an overpass 100. The overpass 100 includes a plurality of roads, such as road 101, road 102, and road 103. Road 101 may be the first-layer road by which vehicles enter the overpass; road 102 may be the second-layer road; road 103 may be the third-layer road. For example, the vehicle 104 may enter the overpass 100 along road 101 and then continue onto road 102 along the direction indicated by arrow ①, or onto road 103 along the direction indicated by arrow ②.
On the road 101, the vehicle 104 may travel based on the route indicated by the navigation 106. The navigation 106 may be: vehicle navigation or mobile phone navigation, etc.
The overpass 100 may include a plurality of cameras therein, such as a camera 112 provided on a special camera fixing bar, a camera 108 provided on a street lamp, a camera 110 provided on a billboard, or a camera 111 provided under a bridge, etc. It can be understood that the camera may be disposed at other positions according to an actual scene, which is not limited in this embodiment of the application.
Illustratively, in the scenario corresponding to fig. 1, the user drives the vehicle 104 onto the overpass 100 along road 101 according to the route indicated by the navigation 106. According to that route the user should drive onto road 103, but instead drives onto road 102. The navigation 106 ought to detect the routing error in time and re-plan the route based on road 102. However, because roads 102 and 103 occupy almost the same position in latitude-longitude or GPS terms, the navigation 106 cannot distinguish which layer of the overpass the user is currently on, cannot recognize that the user is not traveling on road 103 as indicated, and therefore cannot provide the user with an accurate route based on the correct road layer.
The prior art provides an RFID-based intelligent overpass navigation method. Specifically, RFID tags are provided on the overpass; when the vehicle determines that the current road is an overpass road (or another multilayer road), it receives, through a radio frequency antenna, the information sent by the RFID tags on the overpass. This signal may be a radio frequency signal carrying the current road number, so the vehicle learns the exact road layer it is on and navigates based on the acquired GPS positioning signal, road number, and driving route.
However, this method has the following problems. First, GPS positioning accuracy is on the order of ten meters, so navigation may be inaccurate, or simply wrong, where overpass roads are densely packed. Second, it requires a radio frequency antenna at the bottom of the vehicle and RFID tags under the road surface, so RFID must be laid in the overpass road surface; the tags are then easily crushed by passing vehicles, the installation workload is large, the existing road surface is easily damaged, and the cost is high. Third, the identification range of RFID is limited, and a vehicle far from a tag within the road may fail to receive the tag's signal.
In view of this, an embodiment of the application provides a positioning method that makes full use of the cameras already deployed along a multilayer road and accurately determines which layer the user is on based on the correspondence between those cameras and the road surface layers, so that a terminal device receiving the road-layer information can navigate based on the correct road surface layer. The terminal device may be a vehicle or a mobile phone with navigation capability.
It is understood that the terminal device may also be referred to as a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), and the like. The terminal device may be a mobile phone, a smart TV, a wearable device, a tablet computer (Pad), a computer with wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on. The embodiments of the application do not limit the specific technology or device form adopted by the terminal device.
In order to better understand the embodiments of the present application, the following describes the structure of the terminal device according to the embodiments of the present application.
Fig. 2 is a schematic structural diagram of a terminal device 200 according to an embodiment of the present disclosure.
In the embodiment of the application, the terminal device 200 includes a GPS positioning module 180L, which works with the navigation software in the terminal device. For example, the GPS positioning module 180L may locate the current position of the terminal device, and the navigation software presents the result to the user. The positioning module and the navigation software may be a vehicle-mounted GPS positioning module and vehicle navigation software, or the GPS positioning module and navigation software of the user's mobile terminal. The navigation software may include, for example, Baidu Maps or Gaode (Amap) navigation software.
As shown in fig. 2, the terminal device 200 may include a processor 110, an external memory interface 120, an internal memory 121, a power management module 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, a sensor module 180, a key 190, a camera 193, a display screen 194, and the like. Wherein the sensor module 180 may include: a pressure sensor 180A, an acceleration sensor 180E, a fingerprint sensor 180H, a touch sensor 180K, and a positioning module 180L.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal device 200. It will be appreciated that terminal device 200 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the terminal device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The antennas in terminal device 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied on the terminal device 200. The wireless communication module 160 may provide a solution for wireless communication applied to the terminal device 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
In some embodiments, antenna 1 of terminal device 200 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that terminal device 200 can communicate with networks and other devices through wireless communication techniques. The wireless communication technologies may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others.
The terminal device 200 realizes a display function through the display screen 194. The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. In some embodiments, the terminal device 200 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal apparatus 200 may implement a photographing function through the camera 193 or the like. The camera 193 is used to capture still images or video. In some embodiments, the terminal device 200 may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card to extend the memory capability of the terminal device 200. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The acceleration sensor 180E can detect the magnitude of acceleration of the terminal device 200 in various directions (generally, three axes).
The fingerprint sensor 180H is used to collect fingerprints. The terminal device 200 may use the collected fingerprint characteristics to implement fingerprint-based unlocking, application-lock access, photographing, incoming-call answering, and the like.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form what is commonly called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it.
The keys 190 include a volume key and the like. The keys 190 may be mechanical keys or touch keys. The terminal device 200 may receive key input and generate key signal input related to user settings and function control of the terminal device 200.
The sensor module 180 may also include a positioning module 180L. For example, the positioning module may perform positioning based on a GPS system, or may perform positioning based on a beidou system or other positioning systems. The location module 180L may be used to estimate the geographic location of the terminal device 200.
For example, fig. 3 is a schematic structural diagram of a navigation system 300 according to an embodiment of the present application. As shown in fig. 3, the navigation system 300 may include: camera 301, navigation server 302. Optionally, the navigation system 300 may also include a transportation platform server 303 and other devices.
The camera 301 may be used to photograph vehicles. Specifically, the camera 301 may photograph a vehicle running on a multi-layer road surface and perform image processing on the photograph to identify the vehicle's license plate information, which it then uploads to a server. Optionally, when the camera 301 does not have an image processing function, the camera 301 may instead upload the photograph of the vehicle to a server, and the server then performs image processing on the photograph to recognize the license plate information.
The camera 301 may be a camera arranged along the road for capturing and detecting traffic violations, and there may be one or more such cameras.
The navigation server 302 may be used to store, process, receive, and transmit navigation-related data. For example, the navigation server 302 may be a server belonging to a navigation software company such as Baidu or Gaode. Specifically, the navigation server 302 may store a correspondence between camera numbers on a multi-layer road surface and road surface layers, and determine the road surface layer where a vehicle is located according to this correspondence.
The traffic platform server 303 may be used to collect and store the photographs taken by the cameras 301; for example, the traffic platform server 303 may be a server belonging to a traffic authority. Specifically, the traffic platform server 303 may also store a correspondence between camera numbers on a multi-layer road surface and road surface layers, and determine the road surface layer where a vehicle is located according to this correspondence.
It is understood that the navigation system 300 may include other contents according to actual scenes, which is not limited in the embodiments of the present application.
In order to better understand the method of the embodiment of the present application, an application scenario to which the embodiment of the present application is applied is first described below.
In a possible implementation manner, the positioning method provided by the embodiment of the present application may be applied to various scenarios. The plurality of scenarios may include scenario one: a scenario (e.g., a scenario corresponding to fig. 4) for implementing positioning based on the navigation server 302; and scenario two: a scene (such as a scene corresponding to fig. 5) for realizing positioning based on the navigation server 302 and the traffic platform server 303, and the like.
Scene one: the scenario of positioning is implemented based on the navigation server 302.
For example, fig. 4 is a schematic view of a scene of positioning based on the navigation server according to an embodiment of the present application. As shown in fig. 4, this scenario may include a multi-layer road surface comprising road surface layers such as road surface layer ①, road surface layer ②, and road surface layer ③. The scene may also include: a vehicle 401, a GPS positioning module 402 of the vehicle 401, the navigation server 302, an acquisition module 403, an acquisition module 404, an acquisition module 405, and the like. The vehicle 401 carries a license plate, which identifies the vehicle 401. The acquisition module 403 collects license plate information on road surface layer ①, the acquisition module 404 on road surface layer ②, and the acquisition module 405 on road surface layer ③.
For example, when the vehicle 401 travels to the intersection of the multi-layer road surface, the GPS positioning module 402 of the vehicle 401 may recognize that the vehicle is about to enter the multi-layer road surface and upload the position information of the vehicle 401 to the navigation server 302; alternatively, the vehicle 401 may report its position information to the navigation server, which recognizes that the vehicle is about to enter the multi-layer road surface. The vehicle 401 continues to run; when it travels onto road surface layer ②, the acquisition module 404 may capture a picture of the vehicle 401, recognize the corresponding license plate information, and upload its own number together with the license plate information of the vehicle 401 to the navigation server 302. The navigation server 302 may then determine the road surface layer where the vehicle 401 is located, for example road surface layer ②, according to the number of the acquisition module 404, and send the road surface layer information to the navigation software corresponding to the vehicle 401. The navigation software may then update the navigation route according to the accurate road surface layer, for example indicating that the vehicle 401 may travel in the direction shown by either of the arrows in fig. 4.
Scene two: the positioning scene is realized based on the navigation server 302 and the traffic platform server 303.
For example, fig. 5 is a schematic view of a scenario of positioning based on the navigation server and the traffic platform server according to an embodiment of the present application. As shown in fig. 5, this scenario may include a multi-layer road surface comprising road surface layer ①, road surface layer ②, road surface layer ③, and the like. The scene may also include: a vehicle 401, a GPS positioning module 402 of the vehicle 401, the navigation server 302, the traffic platform server 303, an acquisition module 403, an acquisition module 404, an acquisition module 405, and the like. The vehicle 401 carries a license plate. The acquisition module 403 collects license plate information on road surface layer ①, the acquisition module 404 on road surface layer ②, and the acquisition module 405 on road surface layer ③.
For example, when the vehicle 401 travels to within 100 m of the intersection of the multi-layer road surface, the GPS positioning module 402 of the vehicle 401 may recognize that the vehicle is about to enter the multi-layer road surface, or the vehicle 401 may report its position information to the navigation server, which recognizes that the vehicle is about to enter the multi-layer road surface; either event triggers the navigation server 302 to send a query request to the traffic platform server 303. The traffic platform server 303 receives the query request, obtains the license plate sequence (the set of license plates collected by the acquisition modules on the multi-layer road surface within a period of time), and finds in that sequence the number of the acquisition module that captured the license plate of the vehicle 401, for example the acquisition module 404. The traffic platform server 303 may determine the road surface layer where the vehicle 401 is located, for example road surface layer ②, according to the number of the acquisition module 404, and send the road surface layer information to the navigation software corresponding to the vehicle 401. The navigation software may then update the navigation route according to the accurate road surface layer information, for example indicating that the vehicle 401 may travel in the direction shown by either of the arrows in fig. 5.
For the first scenario, exemplarily, fig. 6 is a schematic flowchart of positioning based on the navigation server according to an embodiment of the present application. In the embodiment corresponding to fig. 6, the acquisition module is exemplified as a camera: the acquisition module 403 in fig. 4 can also be understood as a camera 403, the acquisition module 404 as a camera 404, and the acquisition module 405 as a camera 405; the vehicle 401 in fig. 4 can also be understood as the terminal 200. For example, the method based on navigation server positioning may comprise the following steps:
S601, the terminal 200 acquires the license plate information and sends the license plate information to the navigation server 302.
Correspondingly, the navigation server 302 may receive the license plate information sent by the terminal 200.
In the embodiment of the application, the license plate information may be a license plate number or the like; the terminal 200 may be a vehicle or a mobile phone, and the terminal 200 includes navigation software. The terminal 200 may acquire the license plate information by receiving the license plate information that the user inputs into the navigation software of the terminal 200.
Fig. 7 is a schematic diagram of an interface for inputting license plate information according to an embodiment of the present disclosure. In the embodiment corresponding to fig. 7, the terminal 200 is taken as a mobile phone by way of example; this example does not limit the embodiments of the present application.
For example, when the user opens the navigation software on the mobile phone, the phone may display an interface as shown in a in fig. 7, which may include a license plate setting control 701 for entering license plate information. When the user triggers the control 701 in the interface shown in a in fig. 7, the navigation software may jump from that interface to the interface shown in b in fig. 7, where the user can enter license plate information in the "please fill in your license plate" field 702. In response to this input operation, the mobile phone may receive the license plate information entered by the user and send it to the navigation server 302. The license plate information is, for example: A 12345.
S602, the terminal 200 sends the GPS positioning information to the navigation server 302.
Correspondingly, the navigation server 302 may receive the GPS positioning information transmitted by the terminal 200.
In the embodiment of the present application, the GPS positioning information may be used to identify the location of the terminal 200, or to determine the location of the multi-layer pavement on which the terminal 200 is located, and the GPS positioning information may be generated by a GPS positioning module in the terminal 200. For example, the terminal 200 may transmit GPS positioning information acquired in real time to the navigation server 302; accordingly, the navigation server 302 can locate the terminal 200 in real time, and then determine the location of the terminal 200.
It is understood that, at this time, the navigation server 302 may store the corresponding relationship between the license plate information obtained from S601 and the GPS positioning information obtained from S602.
For example, in the scenario shown in fig. 4, when the terminal 200 enters the intersection of the multi-layer road surface, the GPS module in the terminal 200 may recognize that the vehicle is entering the multi-layer road surface and transmit the vehicle's position to the navigation server 302 in real time. The vehicle may then continue to travel along the route indicated by the navigation software in the terminal 200.
S603, the navigation server 302 acquires the license plate picture shot by the camera and the camera number corresponding to the license plate picture.
In the embodiment of the application, the license plate photo can be used to identify a vehicle. The license plate photo may be a picture, taken by a camera, that contains license plate information. The camera may be at least one camera on the multi-layer road surface.
For example, the navigation server 302 may obtain license plate photos taken by a plurality of cameras (e.g., the camera 403, the camera 404, and the camera 405 in the scene shown in fig. 4), together with the camera numbers corresponding to the photos. A camera in the embodiment of the application may or may not have image recognition capability, i.e. the ability to perform image recognition on a captured license plate photo and extract accurate license plate information. The navigation server 302 may control the cameras to take the license plate photos, or it may simply receive the photos that the cameras take continuously and upload to the navigation server 302.
In one implementation, when the camera has an image recognition function, the cameras on the multi-layer road surface can capture multiple license plate pictures, recognize the license plate information in the pictures using the camera's image processing module, and upload the license plate information together with the corresponding camera numbers to the navigation server 302; the navigation server 302 may subsequently execute the step shown in S605.
In another implementation, when the camera does not have an image recognition function, the cameras on the multi-layer road surface may take multiple license plate photos and upload the photos together with the corresponding camera numbers to the navigation server 302; the navigation server 302, which has image recognition capability, may subsequently perform the step shown in S604.
S604, the navigation server 302 performs image processing on the license plate photo to obtain license plate information.
S605, the navigation server 302 determines the road surface layer corresponding to the vehicle according to the license plate information and the camera number.
In this embodiment, the navigation server 302 may store a correspondence between camera numbers and road surface layers. In the scene shown in fig. 4, the correspondence between cameras and road surface layers is shown in table 1 below:
Table 1: Correspondence between camera numbers and road surface layers

| Camera number | Road surface layer |
| ------------- | ------------------ |
| Camera 403    | ① |
| Camera 404    | ② |
| Camera 405    | ③ |
That is, the camera 403 corresponds to road surface layer ①, the camera 404 corresponds to road surface layer ②, and the camera 405 corresponds to road surface layer ③.
For example, the navigation server 302 may obtain multiple sets of correspondences between license plate information and camera numbers. The navigation server 302 may then look up, among these correspondences, the camera number corresponding to the license plate information uploaded in the step shown in S601, for example A 12345, obtaining, say, the camera 404. Further, as shown in table 1, the navigation server may determine from the camera 404 that the terminal 200 is located on road surface layer ②.
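To make the lookup concrete, the following is a minimal Python sketch of the S605 determination, with Table 1 modeled as a dictionary; all names and data structures are illustrative assumptions, since the embodiment does not prescribe an implementation.

```python
# Hypothetical sketch of the S605 lookup; Table 1 is modeled as a dict.
CAMERA_TO_LAYER = {
    "camera_403": "layer ①",
    "camera_404": "layer ②",
    "camera_405": "layer ③",
}

def locate_layer(plate_to_camera: dict, plate: str):
    """Return the road surface layer for a license plate, or None if no
    camera on the multi-layer road captured that plate."""
    camera_id = plate_to_camera.get(plate)   # S601/S603: plate -> camera number
    if camera_id is None:
        return None
    return CAMERA_TO_LAYER.get(camera_id)    # Table 1: camera number -> layer

# Camera 404 photographed plate "A 12345", so the vehicle is on layer ②.
print(locate_layer({"A 12345": "camera_404"}, "A 12345"))  # -> "layer ②"
```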
S606, the navigation server 302 sends the road-surface layer information to the terminal 200.
Correspondingly, the terminal 200 may determine whether the road surface layer information sent by the navigation server 302 has been received.
In one implementation, if the terminal 200 receives the road surface layer information, the navigation software in the terminal 200 may display it. The road surface layer information indicates which level of the multi-layer road surface the vehicle is on, and it may also take other forms, such as a road surface layer number.
Exemplarily, fig. 8 is a schematic interface diagram for displaying a road surface layer according to an embodiment of the present disclosure. In the embodiment corresponding to fig. 8, the terminal 200 is taken as a mobile phone by way of example. When the mobile phone receives the information sent by the navigation server that the vehicle is on road surface layer ②, the navigation software in the phone may display an interface as shown in fig. 8: the indication information 801 in the left half of the screen shows that the vehicle is currently on road surface layer ②, and the right half of the screen displays the route corresponding to road surface layer ②.
In another implementation, if the terminal 200 does not receive the road layer information, the navigation software in the terminal 200 may continue to navigate according to the original navigation algorithm.
For example, fig. 9 is a schematic interface diagram for displaying a road surface layer according to another embodiment of the present application. In the embodiment corresponding to fig. 9, the terminal 200 is taken as a mobile phone by way of example. When the mobile phone does not receive the road surface layer information, the navigation software may display the interface of the original navigation route shown in fig. 9, for example the navigation route indicated along road surface layer ①. As shown in fig. 9, the left half of the screen may display the direction in which the vehicle is currently driving and a description of that direction, and the right half may display the route corresponding to road surface layer ① indicated by the original navigation algorithm.
Based on this, in a multi-layer road surface scene, the navigation server can accurately locate the road surface layer according to the correspondence between cameras and road surface layers, and can thus provide a more accurate navigation route for the user. The embodiment of the present application makes full use of the existing functions of devices such as the navigation server, requires no new server to be deployed, and reduces implementation cost.
Based on the embodiment corresponding to fig. 6, in a possible implementation manner, after the road surface layer is determined in S605, the road surface layer may be updated, or the current road surface layer maintained, based on the following methods.
In one implementation, the navigation software may implement an update of the road layer or maintain the current road layer based on the weight of the road layer.
For example, after the camera is used to locate the road surface layer where the terminal is located, the navigation server may assign a higher weight to the road surface layer determined by the camera; when the GPS positioning information of the terminal indicates that the vehicle carrying the terminal is on another road surface layer, the navigation server may assign a lower weight to the layer indicated by the GPS positioning information. If, after receiving the road surface layer determined by the camera, the navigation software also receives the layer indicated by the GPS positioning information, then, because the GPS-determined layer has the lower weight, the navigation software takes the camera-determined layer as the reference and may ignore the layer indicated by the GPS positioning information.
Based on this method, different weights are set for the road surface layers determined by different devices, so that more accurate road surface layer information can be obtained under different conditions, and the navigation software can then provide a more accurate navigation route for the user based on that information.
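A minimal sketch of this weight-based arbitration might look as follows; the specific weight values are illustrative assumptions, as the embodiment does not fix them.

```python
# Hypothetical sketch: keep the road surface layer reported with the higher
# weight. In the normal case the camera-determined layer outweighs GPS.
def choose_layer(camera_layer, gps_layer, camera_weight=0.9, gps_weight=0.1):
    candidates = [
        (weight, layer)
        for weight, layer in ((camera_weight, camera_layer),
                              (gps_weight, gps_layer))
        if layer is not None
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]  # highest weight wins

print(choose_layer(camera_layer="②", gps_layer="①"))  # -> "②"
```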
In another implementation, the navigation software may implement updates to the road layer or maintain the current road layer based on time.
For example, after the camera is used to locate the road surface layer where the terminal is located, say road surface layer ①, the navigation software may take road surface layer ① as the reference within a certain time threshold and not update it. When the time threshold is exceeded, the navigation software may request the information of the road surface layer where the vehicle is located again. For example, the navigation server may set a validity time for a road surface layer when it is obtained, say a validity time of 1 minute for road surface layer ①. When the navigation software of the terminal receives the camera-determined information that the vehicle is currently on road surface layer ①, it does not need to update that layer within the 1-minute validity time. It can be understood that within that minute, even if the navigation software receives road surface layer information sent by another device, road surface layer ① is not updated. Once the validity time has passed, the navigation software takes newly received road surface layer information as the standard.
For another example, the navigation software may receive both the road surface layer indicated by the GPS positioning module and the road surface layer determined based on the camera. Owing to transmission delay, suppose the navigation software first receives the GPS-indicated information that the layer is road surface layer ② and updates the original layer to ②, and only afterwards receives the camera-determined information, also road surface layer ②. Because the vehicle may have travelled some distance in the meantime, the later-arriving camera-determined information may no longer be accurate enough, so it can be discarded and the current road surface layer maintained.
Based on the above, different validity times are set for the road surface layers determined by different devices, so that the more accurate road surface layer information within each period can be obtained, and the navigation software can provide a more accurate navigation route for the user based on that information.
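The validity-time rule can be sketched as below, with a 60-second window mirroring the 1-minute example above; the class and method names are assumptions.

```python
import time

class LayerState:
    """Hypothetical sketch of the validity-time rule: while the current
    layer is still valid, later reports are ignored; after expiry, the
    next received layer is adopted as the new standard."""
    def __init__(self, validity_s: float = 60.0):
        self.layer = None
        self.expires_at = 0.0          # monotonic timestamp
        self.validity_s = validity_s

    def update(self, new_layer):
        now = time.monotonic()
        if self.layer is not None and now < self.expires_at:
            return self.layer          # still valid: keep the current layer
        self.layer = new_layer         # expired (or unset): adopt new layer
        self.expires_at = now + self.validity_s
        return self.layer

state = LayerState()
state.update("①")          # camera-determined layer, valid for 60 s
print(state.update("②"))   # within validity: still "①"
```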
In another implementation, the navigation software may update the road layer based on the road layer reported by the user at the terminal.
Fig. 10 is a schematic view of an interface for user reporting according to an embodiment of the present application. In the embodiment corresponding to fig. 10, the terminal 200 is taken as a mobile phone by way of example. The user can tell which layer of the multi-layer road surface the vehicle is driving on. Therefore, when the GPS module in the mobile phone detects that the vehicle is driving on a multi-layer road surface, a prompt may be shown to the user, as in the interface shown in a in fig. 10: the indication information 1001 in the left half of the screen asks the user to select the current driving road surface layer, while the right half of the screen displays the navigation route corresponding to road surface layer ② indicated by the navigation software before any user report is received. When the user triggers the control corresponding to road surface layer ①, in response to this operation the navigation software may switch from the interface shown in a in fig. 10 to the interface shown in b in fig. 10, where the indication information 1002 in the left half of the screen may display "switched to the route corresponding to road surface layer ① for you", and the right half of the screen displays the navigation route corresponding to road surface layer ① indicated by the navigation software after receiving the user's report.
Based on the above, the navigation software can obtain the currently more accurate road surface layer information from the layer reported by the user in different scenes, and can then provide a more accurate navigation route for the user.
For the second scenario, exemplarily, fig. 11 is a schematic flowchart of positioning based on the navigation server and the traffic platform server according to an embodiment of the present application. In the embodiment corresponding to fig. 11, the acquisition module is exemplified as a camera: the acquisition module 403 in fig. 5 can also be understood as a camera 403, the acquisition module 404 as a camera 404, and the acquisition module 405 as a camera 405; the vehicle 401 in fig. 5 can also be understood as the terminal 200. For example, the method for positioning based on the navigation server and the traffic platform server may comprise the following steps:
S1101, the terminal 200 acquires the license plate information and sends the license plate information to the navigation server 302.
It is understood that S1101 is similar to the step shown in S601 in the corresponding embodiment of fig. 6, and is not repeated herein.
S1102, the terminal 200 transmits the GPS positioning information to the navigation server 302.
The navigation server 302 may determine the location of the terminal 200 or the location of the vehicle, such as on which overpass, tunnel, or multi-layer road surface the vehicle is located, based on the positioning information.
S1103, when the navigation software in the terminal 200 recognizes that the vehicle is 100 meters (m) from the intersection of the multi-layer road surface, the initial time is set to t0.
The navigation software may also set the initial time based on the time at which the vehicle is N m from the multi-layer road surface, where N may be a positive integer.
Illustratively, the navigation software in the terminal 200 can recognize, through the GPS positioning module, that the vehicle is only 100 meters from the camera position at the multi-layer road intersection; this time is set to t0, and a trigger signal is sent to the navigation server 302.
S1104, the navigation software in the terminal 200 may trigger the navigation server 302 to send a query request to the traffic platform server 303.
Correspondingly, the traffic platform server 303 receives the query request sent by the navigation server 302. The query request may include the license plate information and/or the GPS positioning information of the vehicle.
In a possible implementation manner, the query request may also include the position information of the multi-layer road surface, the number of the multi-layer road surface (also called the overpass identifier), and the like. For example, the navigation server 302 may determine the position or the number of the multi-layer road surface based on the GPS positioning information of the terminal 200.
In a possible implementation manner, the query request may also include camera numbers. For example, when the navigation server 302 stores the numbers of the cameras on the multi-layer road surface, the navigation server 302 may use the GPS positioning information to query, in a targeted way, which cameras on that multi-layer road surface took the license plate photos or license plate sequence, thereby obtaining the camera numbers.
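For illustration only, the query request of S1104 might carry fields shaped like the following; the field names and values are assumptions, since the embodiment only enumerates the possible contents.

```python
# Hypothetical shape of the S1104 query request; only the listed contents
# come from the text, the field names and values are illustrative.
query_request = {
    "license_plate": "A 12345",            # identifier of the target vehicle
    "gps_position": (39.9042, 116.4074),   # and/or GPS positioning information
    "road_position": "multi_layer_road_7", # optional: multi-layer road position
    "overpass_id": "overpass_7",           # optional: multi-layer road number
    "camera_ids": ["camera_403", "camera_404", "camera_405"],  # optional
}
```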
S1105, according to the position information of the multi-layer road surface, the traffic platform server 303 retrieves the license plate sequence A captured by all cameras at that position within (t0+3) s to (t0+9) s, compares the obtained license plate information with the license plate sequence A, and obtains the camera number or road surface layer information corresponding to the license plate information.
The license plate sequence A may be the sequence of license plate information obtained by performing image processing on the license plate photos taken within (t0+3) s to (t0+9) s by all cameras at the position of the multi-layer road surface. For example, in the scene corresponding to fig. 5, the traffic platform server 303 may obtain the license plate sequence A captured by the camera 403, the camera 404, and the camera 405.
In the embodiment of the present application, the time range may be determined as follows: suppose the vehicle travels at 60 km/h (about 16.7 m/s) and is 100 m ahead of the camera at time t0; after 6 s it reaches the position of the overpass camera, so the camera completes shooting at about time (t0+6) s. Because the vehicle's speed may be higher or lower, some margin is left on both sides so that the camera captures the vehicle in most cases; the time range can therefore be set to (t0+3) s to (t0+9) s. It is understood that the time range may vary with the actual scenario, which is not limited in the embodiments of the present application; for example, the range could instead be (t0+4) s to (t0+8) s.
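The derivation of the window can be reproduced in a few lines, using the example values from the text (60 km/h, 100 m, and a 3 s margin on each side):

```python
# Sketch of the capture-window derivation: at 60 km/h (~16.7 m/s) a vehicle
# 100 m from the camera arrives after ~6 s; a +/- 3 s margin absorbs
# variation in vehicle speed.
def capture_window(t0: float, distance_m: float = 100.0,
                   speed_kmh: float = 60.0, margin_s: float = 3.0):
    travel_s = distance_m / (speed_kmh / 3.6)    # 100 / 16.7 ~= 6 s
    return (t0 + travel_s - margin_s, t0 + travel_s + margin_s)

print(capture_window(0.0))  # -> (3.0, 9.0), i.e. (t0+3) s to (t0+9) s
```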
In the embodiment of the present application, the traffic platform server 303 or the navigation server 302 may obtain the road layer information of the terminal 200.
In one implementation, when the traffic platform server 303 stores the correspondence between camera numbers and road surface layers, the road surface layer information may be acquired as follows: the traffic platform server 303 compares the license plate sequence A with the license plate information acquired in S1104; if the license plate information is found in the sequence A, the traffic platform server 303 looks up the camera number corresponding to that license plate information, determines the road surface layer corresponding to that camera number from the stored correspondence, and may subsequently execute the step shown in S1106 to send the road surface layer information to the navigation server 302.
In another implementation, when the navigation server 302 stores the correspondence between camera numbers and road surface layers, the road surface layer information may be acquired as follows: the traffic platform server 303 compares the license plate sequence A with the license plate information acquired in S1104; if the license plate information is found in the sequence A, the traffic platform server 303 looks up the corresponding camera number and may execute the step shown in S1106 to send the camera number to the navigation server 302.
Further, the navigation server 302 may determine a road layer where the vehicle is located based on the correspondence between the camera number and the road layer, and may subsequently transmit the road layer information to the terminal 200.
If the traffic platform server 303 compares the license plate sequence A with the license plate information obtained in S1104 and the license plate information is not found in the sequence A, null information may be returned to the terminal 200. Correspondingly, after the terminal 200 receives the null information, the navigation software in the terminal 200 may continue to navigate according to the original navigation algorithm.
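Tying S1105 together, the comparison against the license plate sequence A could be sketched as follows; the record format and names are hypothetical.

```python
# Hypothetical sketch of S1105: sequence A is a list of (plate, camera_id)
# records captured within the (t0+3) s to (t0+9) s window.
CAMERA_TO_LAYER = {"camera_403": "①", "camera_404": "②", "camera_405": "③"}

def find_layer(sequence_a, target_plate):
    """Return the road surface layer for the target plate, or None when the
    plate is absent from sequence A (the 'null information' case)."""
    for plate, camera_id in sequence_a:
        if plate == target_plate:
            return CAMERA_TO_LAYER.get(camera_id)
    return None

seq_a = [("B 67890", "camera_403"), ("A 12345", "camera_404")]
print(find_layer(seq_a, "A 12345"))  # -> "②"
print(find_layer(seq_a, "C 00000"))  # -> None (terminal keeps original route)
```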
S1106, the traffic platform server 303 sends the camera number or the road layer information to the navigation server 302.
Correspondingly, the navigation server 302 may receive the camera number or the road surface layer information sent by the traffic platform server 303.
In one implementation, when the navigation server 302 receives the camera number sent by the traffic platform server 303, the navigation server may determine the road surface layer corresponding to that camera number based on the correspondence between camera numbers and road surface layers stored in the navigation server, and may then execute the step shown in S1107.
In another implementation, the traffic platform server 303 may determine the road surface layer information based on the correspondence between camera numbers and road surface layers stored in the traffic platform server 303; when the navigation server 302 receives the road surface layer information sent by the traffic platform server 303, the step shown in S1107 may subsequently be executed.
S1107, the navigation server 302 transmits the road layer information to the terminal 200.
Correspondingly, the terminal 200 receives the road surface layer information sent by the navigation server 302 and displays it in the navigation software.
Based on this, in a multi-layer road surface scene, the traffic platform server and the navigation server can accurately locate the road surface layer according to the correspondence between cameras and road surface layers, and can thus provide a more accurate navigation route for the user. They also make full use of the existing functions of devices such as the navigation server and the traffic platform server, requiring no new server and reducing implementation cost.
On the basis of the embodiment corresponding to fig. 6 or fig. 11, in one possible case, when the weather is bad or visibility is low, for example on a thunderstorm or haze day, the license plate information recognized from the photos taken by the camera may be inaccurate.
For example, on a rainy day, the license plate photo taken by a camera may be blurred by the weather, so that the camera cannot recognize the license plate information in the photo, or recognizes it incorrectly. Therefore, when the license plate information captured by the camera is inaccurate, the error can be corrected in the following ways.
In one implementation, when the vehicle is in a severe-weather scene, the license plate information captured by the camera may be inaccurate, so the navigation software can correct the error based on road surface layer weights. That is, the navigation software corrects the error using the road surface layer determined by the camera, the road surface layer indicated by the GPS positioning information, and the weights of the two layers.
For example, after the camera is used to locate the road surface layer where the vehicle is located, the navigation server may assign a lower weight to the camera-determined layer; when the GPS positioning information of the terminal indicates that the vehicle carrying the terminal is on another road surface layer, the navigation server may assign a higher weight to the GPS-indicated layer. If, after receiving the camera-determined layer, the navigation software also receives the GPS-indicated layer, then, because the GPS-determined layer now has the higher weight, the navigation software takes the road surface layer indicated by the GPS positioning information as the reference and corrects the road surface layer information determined based on the camera.
Based on the method, the navigation software can obtain more accurate road surface layer information according to the weight in different scenes, and then the navigation software can provide more accurate navigation routes for users based on the accurate road surface layer information.
In another implementation, when the vehicle is in a bad-weather scene and the license plate information captured by the camera may therefore be inaccurate, error correction can be performed based on the distance between road surface layers. That is, the navigation software corrects the error using the road surface layer determined by the camera, the road surface layer indicated by the GPS positioning information, and the distance between the two layers.
For example, after the camera is used to locate the road surface layer where the terminal is located, say road surface layer ①, and the GPS positioning information of the terminal indicates that the vehicle is on another road surface layer, say road surface layer ②, the navigation software may decide whether road surface layer ① needs to be updated based on the distance between layers ① and ②. If the navigation software determines that the distance between the two layers exceeds a certain distance threshold, it can conclude that road surface layer ① is inaccurate and correct it, taking the GPS-indicated layer ② as the reference; if the distance does not exceed the threshold, road surface layer ① can be considered accurate, and the navigation software need not correct it.
Based on the above, in different scenes the navigation software can obtain more accurate road surface layer information from the layers determined by different devices and the distance between those layers, and can then provide a more accurate navigation route for the user based on that information.
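A sketch of this distance check follows, with an assumed per-layer height and distance threshold (the embodiment specifies neither):

```python
# Hypothetical sketch of the distance-threshold correction. Layers are
# numbered 1, 2, 3 ... and an assumed 5 m height separates adjacent layers.
def corrected_layer(camera_layer: int, gps_layer: int,
                    layer_height_m: float = 5.0, threshold_m: float = 8.0):
    """Fall back to the GPS-indicated layer only when the two layers are far
    enough apart to suggest a misread license plate."""
    distance = abs(camera_layer - gps_layer) * layer_height_m
    return gps_layer if distance > threshold_m else camera_layer

print(corrected_layer(camera_layer=1, gps_layer=2))  # 5 m <= 8 m: keep layer ①
print(corrected_layer(camera_layer=1, gps_layer=3))  # 10 m > 8 m: use layer ③
```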
In another implementation, when the vehicle is in a scene with bad weather, the navigation software may correct the error by using the road surface layer based on the user input because the license plate information captured by the camera may be inaccurate.
For example, when the user is driving the vehicle on road surface layer ① but the navigation software displays the camera-determined layer as road surface layer ②, the user can correct the camera-determined layer by changing the road surface layer information in the navigation software.
Based on this, in different scenes the navigation software can obtain a fairly accurate road surface layer from user input, and can then provide a more accurate navigation route for the user based on it. It should be understood that the interface diagrams provided in the embodiments of the present application are only examples and do not limit the embodiments of the present application.
The method provided by the embodiment of the present application is described above with reference to fig. 6 to fig. 11, and the apparatus provided by the embodiment of the present application for performing the method is described below.
Fig. 12 is a schematic structural diagram of a positioning apparatus 120 according to an embodiment of the present disclosure. As shown in fig. 12, the positioning apparatus 120 may be used in a communication device, a circuit, a hardware component, or a chip, and includes a processing unit 1201 and a communication unit 1202. The processing unit 1201 supports the positioning apparatus in performing the information processing steps, and the communication unit 1202 supports it in performing the data transmission and reception steps. The positioning apparatus 120 may be the positioning system, the terminal device, or the first server in the embodiments of the present application.
Specifically, when the positioning device 120 is a positioning system, the embodiment of the present application provides a positioning apparatus applied to a positioning system that includes a terminal device and a first server. The apparatus includes: a communication unit 1202, configured to send the identifier, start position, and destination of a target object that needs to be navigated to the first server; the communication unit 1202 is further configured to send a first navigation route to the terminal device according to the start position and the destination; the communication unit 1202 is further configured to report the position information of the terminal device to the first server while the terminal device travels along the first navigation route; when the position information reflects that the terminal device is about to enter an intersection, the processing unit 1201 is configured to obtain the target lane where the terminal device is located, where a plurality of cameras are arranged at the intersection for photographing objects in different lanes at the intersection, and the target lane is determined by the first server based on content captured by the cameras or according to information received from a second server; the communication unit 1202 is further configured to send indication information indicating the target lane to the terminal device; and the processing unit 1201 is further configured to prompt the user with the target lane according to the indication information.
In a possible implementation manner, the processing unit 1201 is specifically configured to obtain a plurality of first association relationships based on the plurality of cameras photographing objects in a plurality of lanes at the intersection, where any one first association relationship includes an image and the identifier of the camera that captured the image; the processing unit 1201 is further specifically configured to, when the identifier of the target object is recognized in the multiple images, determine the target camera corresponding to the target image containing the identifier of the target object; the processing unit 1201 is further specifically configured to determine the target lane where the target camera is located according to the second association relationship; the second association relationship includes a correspondence relationship between cameras and lanes.
In a possible implementation manner, the communication unit 1202 is specifically configured to send an inquiry request to the second server, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit 1202 is further specifically configured to receive an identification of the target lane from the second server.
In a possible implementation manner, the communication unit 1202 is specifically configured to send an inquiry request to the second server, where the inquiry request includes the identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit 1202 is further specifically configured to receive the identifier of a target camera from the second server, where the target camera is the camera that captured the target object; the processing unit 1201 is specifically configured to determine the target lane where the target camera is located according to the second association relationship; the second association relationship includes a correspondence relationship between cameras and lanes.
In one possible implementation, the communication unit 1202 is further configured to transmit a second navigation route to the terminal device according to the target lane and the destination when the target lane is different from the lane indicated in the first navigation route.
In a possible implementation manner, when the first server receives the location information from the terminal device within a first time period, the processing unit 1201 is specifically configured to continuously navigate the terminal device according to the second navigation route within the first time period; when the first server receives the location information from the terminal device after the first time period, the processing unit 1201 is further specifically configured to navigate the terminal device according to the location information of the terminal device received after the first time period.
In a possible implementation, the processing unit 1201 is further configured to set a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to the environment information; when the environment information indicates that the environment is not conducive to image recognition, the first weight is larger than the second weight; when the environment information indicates that the environment does not affect image recognition, the first weight is smaller than the second weight; when the target lane is different from the lane indicated in the first navigation route, the communication unit 1202 is further configured to send the second navigation route to the terminal device according to the destination and whichever of the target lane and the lane indicated in the first navigation route has the larger weight.
In a possible implementation manner, when the target lane is different from the lane indicated in the first navigation route, and the distance between the target lane and the lane indicated in the first navigation route is greater than the distance threshold, the processing unit 1201 is further configured to continuously navigate the terminal device according to the second navigation route.
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
Specifically, when the positioning device 120 is a first server, the embodiment of the present application provides a positioning apparatus, which includes: a communication unit 1202, configured to receive, from a terminal device, the identifier, start position, and destination of a target object that needs to be navigated; the communication unit 1202 is further configured to send a first navigation route to the terminal device according to the start position and the destination; the communication unit 1202 is further configured to receive the position information of the terminal device while it travels along the first navigation route; when the position information reflects that the terminal device is about to enter an intersection, the processing unit 1201 is configured to obtain the target lane where the terminal device is located, where a plurality of cameras are arranged at the intersection for photographing objects in different lanes at the intersection, and the target lane is determined by the first server based on content captured by the cameras or according to information received from a second server; and the communication unit 1202 is further configured to send indication information indicating the target lane to the terminal device.
In a possible implementation manner, the processing unit 1201 is specifically configured to obtain a plurality of first association relationships based on a plurality of cameras shooting objects in a plurality of lanes at an intersection, where any one of the first association relationships includes an image and an identifier of a camera shooting the image; when the first server identifies the identifier of the target object in the multiple images, the processing unit 1201 is further specifically configured to determine a target camera corresponding to the target image that includes the identifier of the target object; the processing unit 1201 is further specifically configured to determine, according to the second association relationship, a target lane where the target camera is located; the second association relationship includes a correspondence relationship between the camera and the lane.
In a possible implementation manner, the communication unit 1202 is specifically configured to send an inquiry request to the second server, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit 1202 is specifically configured to receive an identification of a target lane from the second server.
In a possible implementation manner, the communication unit 1202 is specifically configured to send an inquiry request to the second server, where the inquiry request includes an identifier of the target object and any one of the following: position information of the target object or an identification of the intersection; the communication unit 1202 is further specifically configured to receive an identifier of a target camera from the second server, where the target camera is a camera that captures a target object; the processing unit 1201 is specifically configured to determine a target lane where the target camera is located according to the second association relationship; the second association relationship includes a correspondence relationship between the camera and the lane.
In one possible implementation, the communication unit 1202 is further configured to transmit a second navigation route to the terminal device according to the target lane and the destination when the target lane is different from the lane indicated in the first navigation route.
In a possible implementation manner, when the first server receives the location information from the terminal device within a first time period, the processing unit 1201 is specifically configured to continuously navigate the terminal device according to the second navigation route within the first time period; when the first server receives the location information from the terminal device after the first time period, the processing unit 1201 is further specifically configured to navigate the terminal device according to the location information of the terminal device received after the first time period.
In a possible implementation, the processing unit 1201 is further configured to set a first weight for the lane indicated in the first navigation route and a second weight for the target lane according to the environment information; when the environment information indicates that the environment is not conducive to image recognition, the first weight is larger than the second weight; when the environment information indicates that the environment does not affect image recognition, the first weight is smaller than the second weight; when the target lane is different from the lane indicated in the first navigation route, the communication unit 1202 is further configured to send the second navigation route to the terminal device according to the destination and whichever of the target lane and the lane indicated in the first navigation route has the larger weight.
In a possible implementation manner, when the target lane is different from the lane indicated in the first navigation route, and the distance between the target lane and the lane indicated in the first navigation route is greater than the distance threshold, the processing unit 1201 is further configured to continuously navigate the terminal device according to the second navigation route.
In one possible implementation manner, the identifier of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
Specifically, when the positioning apparatus 120 is a terminal device, the embodiment of the present application provides a positioning apparatus, which includes: a communication unit 1202, configured to send the identifier, start position, and destination of a target object that needs to be navigated to a first server; the communication unit 1202 is further configured to receive a first navigation route from the first server, the first navigation route being related to the start position and the destination; the communication unit 1202 is further configured to report the position information of the terminal device to the first server while the terminal device travels along the first navigation route; when the position information reflects that the terminal device is about to enter an intersection, the communication unit 1202 is further configured to send a prompt message to the first server, the prompt message indicating that the terminal device is about to enter the intersection; the communication unit 1202 is further configured to receive, from the first server, indication information indicating the target lane; and the processing unit 1201 is configured to prompt the user with the target lane according to the indication information.
It can be understood that the positioning apparatus 120 in the above aspects has the functions of implementing the corresponding steps performed by the positioning system, the first server, or the terminal device in the above methods.
In a possible embodiment, the positioning apparatus 120 may further include a storage unit 1203. The processing unit 1201 and the storage unit 1203 are connected through a communication line.
The storage unit 1203 may include one or more memories, which may be one or more devices, circuits or other components for storing programs or data.
The storage unit 1203 may be independent of the processing unit 1201 provided in the positioning apparatus and connected to it through a communication line, or the storage unit 1203 may be integrated with the processing unit 1201.
The communication unit 1202 may be an input/output interface, a pin, a circuit, or the like. The storage unit 1203 may store computer-executable instructions of the methods in the above embodiments, so that the processing unit 1201 performs those methods. The storage unit 1203 may be a register, a cache, a RAM, or the like, in which case it may be integrated with the processing unit 1201; or it may be a ROM or another type of static storage device that can store static information and instructions, in which case it may be separate from the processing unit 1201.
Fig. 13 is a schematic diagram of a hardware structure of a control device according to an embodiment of the present application. As shown in Fig. 13, the control device includes a processor 1301, a communication line 1304, and at least one communication interface (Fig. 13 takes the communication interface 1303 as an example).
The processor 1301 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of the programs of the present application.
The communication lines 1304 may include circuitry to communicate information between the above-described components.
Optionally, the control device may further include a memory 1302.
The memory 1302 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, etc.), magnetic disk storage or another magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate and coupled to the processor through the communication line 1304, or the memory may be integrated with the processor.
The memory 1302 is configured to store computer-executable instructions for executing the solutions of the present application, and execution is controlled by the processor 1301. The processor 1301 is configured to execute the computer-executable instructions stored in the memory 1302, thereby implementing the positioning method provided in the embodiments of the present application.
Possibly, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code; this is not specifically limited in the embodiments of the present application.
In a specific implementation, as an embodiment, the processor 1301 may include one or more CPUs, such as CPU0 and CPU1 in Fig. 13.
In a specific implementation, as an embodiment, the control device may include multiple processors, such as the processor 1301 and the processor 1305 in Fig. 13. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Exemplarily, Fig. 14 is a schematic structural diagram of a chip provided in an embodiment of the present application. The chip 140 includes one or more processors 1410 (two are shown in Fig. 14) and a communication interface 1430.
In some embodiments, memory 1440 stores the following elements: an executable module or a data structure, or a subset thereof, or an expanded set thereof.
In this embodiment of the present application, the memory 1440 may include a read-only memory and a random access memory, and provide instructions and data to the processor 1410. A portion of the memory 1440 may also include a non-volatile random access memory (NVRAM).
In the illustrated embodiment, the processor 1410, the communication interface 1430, and the memory 1440 are coupled together through a bus system 1420. In addition to a data bus, the bus system 1420 may include a power bus, a control bus, a status signal bus, and the like. For ease of description, the various buses are labeled as the bus system 1420 in Fig. 14.
The methods described in the embodiments of the present application may be applied to the processor 1410 or implemented by the processor 1410. The processor 1410 may be an integrated circuit chip with signal processing capabilities. During implementation, the steps of the above methods may be completed by an integrated logic circuit of hardware in the processor 1410 or by instructions in the form of software. The processor 1410 may be a general-purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component, and the processor 1410 may implement or execute the methods, steps, and logical blocks disclosed in the embodiments of the present application.
The steps of the methods disclosed with reference to the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable read-only memory (EEPROM). The storage medium is located in the memory 1440, and the processor 1410 reads the information in the memory 1440 and completes the steps of the above methods in combination with its hardware.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium that a computer can store, or a data storage device, such as a server or a data center, integrating one or more available media. The available media may include, for example, magnetic media (e.g., a floppy disk, a hard disk, or a magnetic tape), optical media (e.g., a digital versatile disc (DVD)), or semiconductor media (e.g., a solid state disk (SSD)).
An embodiment of the present application further provides a computer-readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer-readable media may include computer storage media and communication media, and may include any medium that can transfer a computer program from one place to another. A storage medium may be any target medium that can be accessed by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disc storage; the computer-readable medium may include a magnetic disk memory or another magnetic disk storage device. Also, any connection may properly be termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable media. The above descriptions are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (20)
1. A positioning method, applied to a positioning system, wherein the positioning system comprises terminal equipment and a first server, and the method comprises:
the terminal equipment sends an identification, a starting position and a destination of a target object needing navigation to the first server;
the first server sends a first navigation route to the terminal equipment according to the starting position and the destination;
in the process that the terminal equipment travels according to the first navigation route, the terminal equipment reports the position information of the terminal equipment to the first server;
when the position information reflects that the terminal equipment is about to drive into the intersection, the first server acquires a target lane where the terminal equipment is located; the intersection is provided with a plurality of cameras, and the cameras are used for shooting objects in different lanes in the intersection; the target lane is determined by the first server based on the content obtained by the camera, or the target lane is determined by the first server according to the information received from the second server;
the first server sends indication information used for indicating the target lane to the terminal equipment;
the terminal equipment prompts, according to the indication information, the user with the target lane where the user is located;
wherein the acquiring, by the first server, of the target lane where the terminal equipment is located comprises:
the first server obtains a plurality of first association relations based on the plurality of cameras shooting objects in a plurality of lanes of the intersection, wherein any one of the first association relations comprises an image and an identifier of the camera shooting the image;
when the first server identifies the identification of the target object in the plurality of images, the first server determines a target camera corresponding to a target image comprising the identification of the target object;
the first server determines a target lane where the target camera is located according to a second association relation; the second association relation comprises a correspondence between the camera and the lane.
2. The method according to claim 1, wherein the acquiring, by the first server, a target lane in which the terminal device is located includes:
the first server sends a query request to the second server, wherein the query request includes an identification of the target object and any one of: position information of the target object or an identification of the intersection;
the first server receives an identification of a target lane from the second server.
3. The method according to claim 1, wherein the acquiring, by the first server, a target lane in which the terminal device is located includes:
the first server sends a query request to the second server, wherein the query request includes an identification of the target object and any one of: position information of the target object or an identification of the intersection;
the first server receives an identification of a target camera from the second server, wherein the target camera is a camera for shooting the target object;
the first server determines a target lane where the target camera is located according to the second association relation; the second association relation comprises a correspondence between the camera and the lane.
4. The method according to any one of claims 1-3, further comprising:
when the target lane is different from the lane indicated in the first navigation route, the first server transmits a second navigation route to the terminal device according to the target lane and the destination.
5. The method according to claim 4, wherein after the first server sends a second navigation route to the terminal device according to the target lane and the destination, the method further comprises:
when the first server receives the position information from the terminal equipment in a first time period, the first server continuously navigates the terminal equipment according to the second navigation route in the first time period;
and when the first server receives the position information from the terminal equipment after the first time period, the first server navigates the terminal equipment according to the position information of the terminal equipment received after the first time period.
6. The method according to any one of claims 1-3, further comprising:
the first server sets a first weight for a lane indicated in the first navigation route and sets a second weight for the target lane according to environmental information; wherein the first weight is greater than the second weight when the environment information indicates that the environment is not favorable for image recognition; when the environment information indicates that the environment does not affect image recognition, the first weight is smaller than the second weight;
when the target lane is different from the lane indicated in the first navigation route, the first server transmits a second navigation route to the terminal device according to the destination and the lane with the larger weight between the target lane and the lane indicated in the first navigation route.
7. The method according to any one of claims 1-3, further comprising:
when the target lane is different from the lane indicated in the first navigation route and the distance between the target lane and the lane indicated in the first navigation route is greater than a distance threshold, the first server continuously navigates the terminal device according to the first navigation route.
8. The method according to any one of claims 1 to 3, wherein the identification of the target object is a license plate number, and the terminal device is a mobile phone or a vehicle.
9. A method of positioning, the method comprising:
a first server receives, from a terminal device, an identification, a starting position, and a destination of a target object needing navigation;
the first server sends a first navigation route to the terminal equipment according to the starting position and the destination;
the first server receives position information reported by the terminal equipment in the process that the terminal equipment travels according to the first navigation route;
when the position information reflects that the terminal equipment is about to drive into the intersection, the first server acquires a target lane where the terminal equipment is located; the intersection is provided with a plurality of cameras, and the cameras are used for shooting objects in different lanes in the intersection; the target lane is determined by the first server based on content obtained by camera shooting, or the target lane is determined by the first server according to information received from a second server;
the first server sends indication information used for indicating the target lane to the terminal equipment;
wherein the acquiring, by the first server, of the target lane where the terminal equipment is located comprises:
the first server obtains a plurality of first association relations based on the plurality of cameras shooting objects in a plurality of lanes of the intersection, wherein any one of the first association relations comprises an image and an identifier of the camera shooting the image;
when the first server identifies the identification of the target object in the plurality of images, the first server determines a target camera corresponding to a target image comprising the identification of the target object;
the first server determines a target lane where the target camera is located according to a second association relation; the second association relation comprises a correspondence between the camera and the lane.
10. The method of claim 9, wherein the obtaining, by the first server, the target lane in which the terminal device is located comprises:
the first server sends a query request to the second server, wherein the query request includes an identification of the target object and any one of: position information of the target object or an identification of the intersection;
the first server receives an identification of a target lane from the second server.
11. The method of claim 9, wherein the obtaining, by the first server, the target lane in which the terminal device is located comprises:
the first server sends a query request to the second server, wherein the query request includes an identification of the target object and any one of: position information of the target object or an identification of the intersection;
the first server receives an identification of a target camera from the second server, wherein the target camera is a camera for shooting the target object;
the first server determines a target lane where the target camera is located according to the second association relation; the second association relation comprises a correspondence between the camera and the lane.
12. The method according to any one of claims 9-11, further comprising:
when the target lane is different from the lane indicated in the first navigation route, the first server sends a second navigation route to the terminal device according to the target lane and the destination.
13. The method according to claim 12, wherein after the first server sends a second navigation route to the terminal device according to the target lane and the destination, the method further comprises:
when the first server receives the position information from the terminal equipment in a first time period, the first server continuously navigates the terminal equipment according to the second navigation route in the first time period;
and when the first server receives the position information from the terminal equipment after the first time period, the first server navigates the terminal equipment according to the position information of the terminal equipment received after the first time period.
14. The method according to any one of claims 9-11, further comprising:
the first server sets a first weight for a lane indicated in the first navigation route and sets a second weight for the target lane according to environmental information; wherein the first weight is greater than the second weight when the environment information indicates that the environment is not favorable for image recognition; when the environment information indicates that the environment does not affect image recognition, the first weight is smaller than the second weight;
when the target lane is different from the lane indicated in the first navigation route, the first server transmits a second navigation route to the terminal device according to the destination and the lane with the larger weight between the target lane and the lane indicated in the first navigation route.
15. The method according to any one of claims 9-11, further comprising:
when the target lane is different from a lane indicated in the first navigation route and a distance between the target lane and the lane indicated in the first navigation route is greater than a distance threshold, the first server continuously navigates the terminal device according to the first navigation route.
16. The method according to any one of claims 9 to 11, wherein the identification of the target object is a license plate number and the terminal device is a mobile phone or a vehicle.
17. A method of positioning, the method comprising:
the terminal equipment sends an identification, a starting position, and a destination of a target object needing navigation to a first server;
the terminal equipment receives a first navigation route from the first server; the first navigation route is related to the starting position and the destination;
in the process that the terminal equipment travels according to the first navigation route, the terminal equipment reports the position information of the terminal equipment to the first server;
when the position information reflects that the terminal equipment is about to drive into an intersection, the terminal equipment sends prompt information to the first server; the prompt information is used for prompting that the terminal equipment is about to drive into the intersection; the intersection is provided with a plurality of cameras, and the plurality of cameras are used for shooting objects in different lanes in the intersection;
the terminal equipment receives indication information used for indicating a target lane from the first server;
the terminal equipment prompts, according to the indication information, the user with the target lane where the user is located;
the target lane is the target lane where the target camera is located, which is determined by the first server according to a second association relation; the second association relation comprises a correspondence between the camera and the lane;
the target camera is the camera corresponding to a target image comprising the identification of the target object, and is determined by the first server when the first server identifies the identification of the target object in a plurality of images, after the first server obtains, based on the plurality of cameras shooting objects in the plurality of lanes of the intersection, a plurality of first association relations each comprising an image and an identification of the camera shooting the image.
18. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, causes the electronic device to perform the method of any of claims 1 to 8, or the method of any of claims 9 to 16, or the method of claim 17.
19. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, causes a computer to perform the method of any one of claims 1 to 8, or the method of any one of claims 9 to 16, or the method of claim 17.
20. A computer program product, comprising a computer program which, when executed, causes a computer to perform the method of any of claims 1 to 8, or the method of any of claims 9 to 16, or the method of claim 17.
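For orientation only, the two-step lookup recited in claims 1, 9, and 17 can be sketched as follows. The data shapes here are assumptions: the claims only fix that a first association relation pairs an image with a camera identification, and a second association relation pairs a camera with a lane.

```python
def find_target_lane(first_relations, second_relation, target_identification, recognize):
    """first_relations: list of (image, camera_id) pairs shot at the intersection.
    second_relation: dict mapping camera_id -> lane.
    recognize: function returning the identifications visible in an image."""
    for image, camera_id in first_relations:
        if target_identification in recognize(image):  # e.g., license plate recognition
            # The camera that shot the target image is the target camera; its
            # lane, per the second association relation, is the target lane.
            return second_relation[camera_id]
    return None  # target object not seen by any camera
```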
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110342456.6A CN113269976B (en) | 2021-03-30 | 2021-03-30 | Positioning method and device |
PCT/CN2022/075723 WO2022206179A1 (en) | 2021-03-30 | 2022-02-09 | Positioning method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110342456.6A CN113269976B (en) | 2021-03-30 | 2021-03-30 | Positioning method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269976A CN113269976A (en) | 2021-08-17 |
CN113269976B true CN113269976B (en) | 2022-08-23 |
Family
ID=77228276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110342456.6A Active CN113269976B (en) | 2021-03-30 | 2021-03-30 | Positioning method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113269976B (en) |
WO (1) | WO2022206179A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269976B (en) * | 2021-03-30 | 2022-08-23 | 荣耀终端有限公司 | Positioning method and device |
CN113660611B (en) * | 2021-08-18 | 2023-04-18 | 荣耀终端有限公司 | Positioning method and device |
CN113868279A (en) * | 2021-09-30 | 2021-12-31 | 京东城市(北京)数字科技有限公司 | Track data processing method, device and system |
CN114509068B (en) * | 2022-01-04 | 2024-07-05 | 海信集团控股股份有限公司 | Method and device for judging positions of vehicles on multilayer roads |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006023278A (en) * | 2004-06-07 | 2006-01-26 | Nissan Motor Co Ltd | On-vehicle navigation system, and lane position prediction device used for the same |
CN104422462A (en) * | 2013-09-06 | 2015-03-18 | 上海博泰悦臻电子设备制造有限公司 | Vehicle navigation method and vehicle navigation device |
CN104880193A (en) * | 2015-05-06 | 2015-09-02 | 石立公 | Lane-level navigation system and lane-level navigation method thereof |
CN104821089A (en) * | 2015-05-18 | 2015-08-05 | 深圳市骄冠科技实业有限公司 | Divided lane vehicle positioning system based on radio frequency license plate with function of communication |
CN105588576B (en) * | 2015-12-15 | 2019-02-05 | 招商局重庆交通科研设计院有限公司 | A kind of lane grade navigation methods and systems |
CN108303103B (en) * | 2017-02-07 | 2020-02-07 | 腾讯科技(深圳)有限公司 | Method and device for determining target lane |
CN107192396A (en) * | 2017-02-13 | 2017-09-22 | 问众智能信息科技(北京)有限公司 | Automobile accurate navigation method and device |
CN109141464B (en) * | 2018-09-30 | 2020-12-29 | 百度在线网络技术(北京)有限公司 | Navigation lane change prompting method and device |
CN110375764A (en) * | 2019-07-16 | 2019-10-25 | 中国第一汽车股份有限公司 | Lane change reminding method, system, vehicle and storage medium |
CN110853360A (en) * | 2019-08-05 | 2020-02-28 | 中国第一汽车股份有限公司 | Vehicle positioning system and method |
CN110488825B (en) * | 2019-08-19 | 2022-03-18 | 中国第一汽车股份有限公司 | Automatic driving ramp port identification method and vehicle |
CN113269976B (en) * | 2021-03-30 | 2022-08-23 | 荣耀终端有限公司 | Positioning method and device |
- 2021-03-30: CN CN202110342456.6A (CN113269976B), status: Active
- 2022-02-09: WO PCT/CN2022/075723 (WO2022206179A1), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2022206179A1 (en) | 2022-10-06 |
CN113269976A (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113269976B (en) | Positioning method and device | |
JP3941312B2 (en) | Road traffic system and information processing method thereof | |
CN109817022B (en) | Method, terminal, automobile and system for acquiring position of target object | |
EP4119399A1 (en) | Driving data collection method and apparatus | |
CN106846897B (en) | Parking stall method for managing resource and device | |
CN103376110A (en) | Picture navigation method and corresponding picture navigation equipment and picture navigation system | |
US20140002652A1 (en) | System and method for in vehicle lane determination using cmos image sensor | |
CN104021695B (en) | The air navigation aid of onboard navigation system, real-time road and querying method | |
CN103914991A (en) | Vehicle position sharing method | |
CN109814137B (en) | Positioning method, positioning device and computing equipment | |
US11963066B2 (en) | Method for indicating parking position and vehicle-mounted device | |
CN104380290A (en) | Information processing device, information processing method, and program | |
CN110972085A (en) | Information interaction method, device, storage medium, equipment and system | |
US11645913B2 (en) | System and method for location data fusion and filtering | |
CN105387854A (en) | Navigation system with content delivery mechanism and method of operation thereof | |
KR101280313B1 (en) | Smart bus information system | |
CN113077627B (en) | Method and device for detecting overrun source of vehicle and computer storage medium | |
CN109767645A (en) | A kind of parking planning householder method and system based on AR glasses | |
KR100957605B1 (en) | System for providing road image | |
JP5053135B2 (en) | Traffic information display system, traffic information display server, traffic information display method, and computer program | |
CN106781470B (en) | Method and device for processing running speed of urban road | |
JP4685286B2 (en) | Information update processing device | |
CN113345251A (en) | Vehicle reverse running detection method and related device | |
CN113673770B (en) | Method, device, equipment and storage medium for determining position of mobile super point | |
JP2010164402A (en) | Information collecting device, mobile terminal device, information center, and navigation system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||