Package Summary
visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Author: Filip Novotny, Fabien Spindler <Fabien.Spindler@inria.fr>
- License: BSD
- Source: git https://github.com/laas/vision_visp.git (branch: master)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: groovy)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: hydro)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: indigo)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: jade)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: kinetic)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code or flash code pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: lunar)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code, flash code, or AprilTag pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: melodic)
Package Summary
Online automated pattern-based object tracker relying on visual servoing. visp_auto_tracker wraps the model-based trackers provided by the ViSP visual servoing library into a ROS package. The tracked object should carry a QR code, flash code, or AprilTag pattern. Based on this pattern, the object is automatically detected; the detection is then used to initialize the model-based tracker. When tracking is lost, a new detection is performed and used to re-initialize the tracker. This computer vision algorithm computes the pose (i.e. position and orientation) of an object in an image. It is fast enough to allow online object tracking using a camera.
- Maintainer status: maintained
- Maintainer: Fabien Spindler <Fabien.Spindler AT inria DOT fr>
- Author: Filip Novotny
- License: GPLv2
- Source: git https://github.com/lagadic/vision_visp.git (branch: noetic)
Overview
This package wraps an automated barcode-pattern-based tracker built on the ViSP library. The tracker estimates the pattern position and orientation with respect to the camera. It requires the pattern's 3D model and a configuration file.
The algorithm first detects the barcode automatically using one of the following detectors:
- QR-code detection
- flashcode detection
Then, from the locations of the 4 barcode corners, it computes an initial pose using a PnP algorithm. This pose is used to initialize the model-based tracker, which is dedicated to tracking the two squares defining the black area around the barcode. For the tracking we use a hybrid approach that combines moving-edge and keypoint features, the latter mainly located on the barcode. Finally, the tracker is also able to detect loss of tracking and recover from it by entering a new barcode detection and localization stage.
The package is composed of one node called visp_auto_tracker. This node tries to track the object as fast as possible. The viewer that comes with the visp_tracker package can be used to monitor the tracking result.
The next video shows how to track a specific pattern textured with a QR code. The ViSP model-based tracker detects when it fails and recovers the object position thanks to QR code detection.
Reference
Calibration Requirements
Currently the visp_auto_tracker package requires calibration information from a camera_info topic. To this end, the visp_camera_calibration package can be used.
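Before launching the tracker, you can quickly check that calibration data is actually being published. For example, assuming your camera driver publishes under the /camera namespace (adjust the topic name to your setup):
rostopic echo -n 1 /camera/camera_info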
Features
The purpose of the package is to provide the 3D pose of an object in a sequence of images. The object has to be textured with a pattern on one face. The pattern has to be enclosed in a white box, itself enclosed in a black box.
This is an example of a valid QR-code pattern that can be downloaded here.
This is an example of a valid flash-code pattern that can be downloaded here.
Installation
visp_auto_tracker is part of the vision_visp stack.
To install the visp_auto_tracker package, run
sudo apt-get install ros-$ROS_DISTRO-visp-auto-tracker
Or, to install the complete stack, run
sudo apt-get install ros-$ROS_DISTRO-vision-visp
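Alternatively, you can build the package from source in a catkin workspace. A minimal sketch, assuming the workspace lives in ~/catkin_ws:
cd ~/catkin_ws/src
git clone https://github.com/lagadic/vision_visp.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y   # resolves dependencies such as ViSP
catkin_make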
Examples
You can run visp_auto_tracker on a pre-recorded bag file that comes with the package, or on a live video from a camera.
Pre-recorded example
To run visp_auto_tracker on a pre-recorded image sequence, just run:
roslaunch launch/tutorial.launch
The pattern used in this example can be downloaded here.
Live video examples
A ready-to-use roslaunch file is provided in launch/tracklive_firewire.launch; it works with a FireWire (1394) camera. If you have a USB camera (such as a webcam), you can use the launch/tracklive_usb.launch launch file instead.
You can launch with the following command line:
roslaunch launch/tracklive_firewire.launch
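If the tracker does not seem to receive any image, first check that your camera driver is publishing frames. The topic name below is only a guess and depends on your camera driver:
rostopic hz /camera/image_raw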
Config file
visp_auto_tracker centralises most of its parameters inside a configuration file following the boost::program_options default format.
The basic configuration file would look like this:
#set the detector type: "zbar" to detect QR code, "dmtx" to detect flashcode
detector-type= zbar
#enable recovery mode when the tracker fails
ad-hoc-recovery= 1
#point 1
flashcode-coordinates= -0.024
flashcode-coordinates= -0.024
flashcode-coordinates= 0.000
#point 2
flashcode-coordinates= 0.024
flashcode-coordinates= -0.024
flashcode-coordinates= 0.000
#point 3
flashcode-coordinates= 0.024
flashcode-coordinates= 0.024
flashcode-coordinates= 0.000
#point 4
flashcode-coordinates= -0.024
flashcode-coordinates= 0.024
flashcode-coordinates= 0.000
#point 1
inner-coordinates= -0.038
inner-coordinates= -0.038
inner-coordinates= 0.000
#point 2
inner-coordinates= 0.038
inner-coordinates= -0.038
inner-coordinates= 0.000
#point 3
inner-coordinates= 0.038
inner-coordinates= 0.038
inner-coordinates= 0.000
#point 4
inner-coordinates= -0.038
inner-coordinates= 0.038
inner-coordinates= 0.000
#point 1
outer-coordinates= -0.0765
outer-coordinates= -0.0765
outer-coordinates= 0.000
#point 2
outer-coordinates= 0.0765
outer-coordinates= -0.0765
outer-coordinates= 0.000
#point 3
outer-coordinates= 0.0765
outer-coordinates= 0.0765
outer-coordinates= 0.000
#point 4
outer-coordinates= -0.0765
outer-coordinates= 0.0765
outer-coordinates= 0.000
Common parameters
detector-type
The following detectors are supported:
- detector-type= zbar: uses libzbar to detect QRcodes
- detector-type= dmtx: uses libdmtx to detect flashcodes
flashcode-coordinates
3D-coordinates in meters of the box delimiting the pattern (QRcode or flashcode).
inner-coordinates
3D-coordinates in meters of the white box containing the pattern.
outer-coordinates
3D-coordinates in meters of the black box containing the pattern.
ad-hoc-recovery
When set (ad-hoc-recovery= 1), this parameter activates tracking-loss detection and recovery using the flashcode-coordinates, inner-coordinates and outer-coordinates point coordinates. A scaled configuration example is shown below.
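To adapt the configuration to another pattern size, the three sets of coordinates simply scale with the printed pattern. As a purely illustrative sketch, a pattern whose barcode is 3 cm wide, inside a 5 cm white box and an 8 cm black box centred on the origin, would use corner values such as the following (only the first corner of each box is shown; the three remaining corners follow the same scheme as in the example above):
#3 cm barcode: corners at +/-0.015 m
flashcode-coordinates= -0.015
flashcode-coordinates= -0.015
flashcode-coordinates= 0.000
#5 cm white box: corners at +/-0.025 m
inner-coordinates= -0.025
inner-coordinates= -0.025
inner-coordinates= 0.000
#8 cm black box: corners at +/-0.040 m
outer-coordinates= -0.040
outer-coordinates= -0.040
outer-coordinates= 0.000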
Tracker states
The tracker is a state machine whose state changes during the tracking process, which includes tracking, loss and recovery. These are the states used (a command-line way to monitor the current state is shown after the list):
- Waiting For Input (id: 0): Not detecting any pattern, just receiving images.
- Detect Flashcode (id: 1): Pattern detected.
- Detect Model (id: 2): Model successfully initialized (from the wrl and xml files).
- Track Model (id: 3): Tracking the model.
- Re Detect Flashcode (id: 4): Detecting the pattern in a small region around where it was last seen.
- Detect Flash code (id: 5): Detecting the pattern in the whole frame.
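You can watch the state machine at run time by echoing the status topic published by the node. The fully qualified topic name below is an assumption based on the default node name; use rostopic list to check it on your system:
rostopic echo /visp_auto_tracker/status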
Viewer
When you track a model, you probably want visual feedback. You can get it by connecting rviz to the published object_position topic. visp_auto_tracker does not have a dedicated viewer, but it can use the viewer provided with the visp_tracker package, specifically the visp_tracker/visp_tracker_viewer node.
Without connecting another node, you can also open a debug graphical output directly from the visp_auto_tracker node by setting the debug_display parameter.
The following figure shows the debug output (left) next to the external visp_tracker/viewer (right) in the case of the hybrid model-based tracker with QR-code initialisation:
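If you prefer plain rviz over the dedicated viewer, a minimal setup is to start rviz, add a Pose display subscribed to the object_position topic (the fully qualified topic name /visp_auto_tracker/object_position assumes the default node name), and set the fixed frame to your camera frame:
rosrun rviz rviz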
Nodes
visp_auto_tracker
Subscribes to a camera and publishes the object pose.
Subscribed Topics
- image_raw (sensor_msgs/Image): The image topic. Should be remapped to the name of the real image topic.
- camera_info (sensor_msgs/CameraInfo): The camera parameters (see Calibration Requirements above).
Published Topics
- object_position (geometry_msgs/PoseStamped): 3D pose of the model.
- 3D pose of the model, with covariance (the covariance part is unused).
- Status of the automatic tracker. See tracker states for more information.
- Moving edge sites information (stamped). For debugging/monitoring purposes.
- Positions and ids of the keypoints (stamped). For debugging/monitoring purposes.
Parameters
- model_path (string): Path to the directory where the models are stored.
- model_name (string): Name of the cfg, wrl and xml files. If model_path is /path/ and model_name is model, then /path/model.wrl, /path/model.xml and /path/model.cfg will be loaded. The content of the cfg file is described in the "Config file" section.
- debug_display: Display debug information about tracking (see the Viewer section above).
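For reference, the following hand-written invocation sketches how the topics and parameters fit together. The executable name, remappings and paths are assumptions used for illustration only; the provided launch files remain the recommended way to start the node:
rosrun visp_auto_tracker visp_auto_tracker image_raw:=/camera/image_raw camera_info:=/camera/camera_info _model_path:=/path/to/models _model_name:=pattern _debug_display:=true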
Report a bug
Use GitHub to report a bug or submit an enhancement.