Andrew I. Comport
Person information
- affiliation: University of Nice Sophia Antipolis, Nice, France
2020 – today
- 2024
- [c40] Arnab Dey, Di Yang, Rohith Agaram, Antitza Dantcheva, Andrew I. Comport, Srinath Sridhar, Jean Martinet: GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields. CVPR Workshops 2024: 2812-2821
- [c39] Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab Dey, Ishaan Nikhil Shah, Rugved Mavidipalli, Dylan Hu, Andrew I. Comport, Kefan Chen, Srinath Sridhar: DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields. CVPR 2024: 22466-22476
- [c38] Georgios-Markos Chatziloizos, Andrea Ancora, Andrew I. Comport, Christian Barat: Low Parameter Neural Networks for In-Car Distracted Driver Detection. ICMLT 2024: 204-208
- [c37] Martin Filliung, Juliette Drupt, Charly Peraud, Claire Dune, Nicolas Boizot, Andrew I. Comport, Cédric Anthierens, Vincent Hugel: An Augmented Catenary Model for Underwater Tethered Robots. ICRA 2024: 6051-6057
- [i9] Arnab Dey, Di Yang, Rohith Agaram, Antitza Dantcheva, Andrew I. Comport, Srinath Sridhar, Jean Martinet: GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields. CoRR abs/2404.06246 (2024)
- 2023
- [i8] Houssem Eddine Boulahbal, Adrian Voicila, Andrew I. Comport: STDepthFormer: Predicting Spatio-temporal Depth from Video with a Self-supervised Transformer Model. CoRR abs/2303.01196 (2023)
- [i7] Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab Dey, Ishaan N. Shah, Rugved Mavidipalli, Dylan Hu, Andrew I. Comport, Kefan Chen, Srinath Sridhar: DiVA-360: The Dynamic Visuo-Audio Dataset for Immersive Neural Fields. CoRR abs/2307.16897 (2023)
- 2022
- [j11] Arnab Dey, Yassine Ahmine, Andrew I. Comport: Mip-NeRF RGB-D: Depth Assisted Fast Neural Radiance Fields. J. WSCG 30(1-2): 34-43 (2022)
- [j10] Houssem-eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Instance-Aware Multi-Object Self-Supervision for Monocular Depth Prediction. IEEE Robotics Autom. Lett. 7(4): 10962-10968 (2022)
- [c36] Houssem Eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Forecasting of depth and ego-motion with transformers and self-supervision. ICPR 2022: 3706-3713
- [c35] Juliette Drupt, Claire Dune, Andrew I. Comport, Sabine Seillier, Vincent Hugel: Inertial-measurement-based catenary shape estimation of underwater cables for tethered robots. IROS 2022: 6867-6872
- [i6] Houssem-eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Instance-aware multi-object self-supervision for monocular depth prediction. CoRR abs/2203.00809 (2022)
- [i5] Arnab Dey, Andrew I. Comport: RGB-D Neural Radiance Fields: Local Sampling for Faster Training. CoRR abs/2203.15587 (2022)
- [i4] Arnab Dey, Yassine Ahmine, Andrew I. Comport: Mip-NeRF RGB-D: Depth Assisted Fast Neural Radiance Fields. CoRR abs/2205.09351 (2022)
- [i3] Houssem-eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Forecasting of depth and ego-motion with transformers and self-supervision. CoRR abs/2206.07435 (2022)
- [i2] Yassine Ahmine, Arnab Dey, Andrew I. Comport: PNeRF: Probabilistic Neural Scene Representations for Uncertain 3D Visual Mapping. CoRR abs/2209.11677 (2022)
- 2021
- [c34] Houssem-eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Are conditional GANs explicitly conditional? BMVC 2021: 201
- [i1] Houssem-eddine Boulahbal, Adrian Voicila, Andrew I. Comport: Are conditional GANs explicitly conditional? CoRR abs/2106.15011 (2021)
2010 – 2019
- 2019
- [j9] Abderrahmane Kheddar, Máximo A. Roa, Pierre-Brice Wieber, François Chaumette, Fabien Spindler, Giuseppe Oriolo, Leonardo Lanari, Adrien Escande, Kevin Chappellet, Fumio Kanehiro, Patrice Rabaté, Stéphane Caron, Pierre Gergondet, Andrew I. Comport, Arnaud Tanguy, Christian Ott, Bernd Henze, George Mesesan, Johannes Englsberger: Humanoid Robots in Aircraft Manufacturing: The Airbus Use Cases. IEEE Robotics Autom. Mag. 26(4): 30-45 (2019)
- [c33] Howard Mahé, Denis Marraud, Andrew I. Comport: Real-time RGB-D semantic keyframe SLAM based on image segmentation learning from industrial CAD models. ICAR 2019: 147-154
- [c32] Arnaud Tanguy, Daniele De Simone, Andrew I. Comport, Giuseppe Oriolo, Abderrahmane Kheddar: Closed-loop MPC with Dense Visual SLAM - Stability through Reactive Stepping. ICRA 2019: 1397-1403
- 2018
- [j8] Fernando I. Ireta Muñoz, Andrew I. Comport: Point-to-hyperplane ICP: fusing different metric measurements for pose estimation. Adv. Robotics 32(4): 161-175 (2018)
- [c31] Howard Mahé, Denis Marraud, Andrew I. Comport: Semantic-only Visual Odometry based on dense class-level segmentation. ICPR 2018: 1989-1995
- [c30] Arnaud Tanguy, Abderrahmane Kheddar, Andrew I. Comport: Online eye-robot self-calibration. SIMPAR 2018: 68-73
- 2017
- [c29] Christian Barat, Andrew I. Comport: Active high dynamic range mapping for dense visual SLAM. IROS 2017: 6514-6519
- [c28] Fernando I. Ireta Muñoz, Andrew I. Comport: Global Point-to-hyperplane ICP: Local and global pose estimation by fusing color and depth. MFI 2017: 22-27
- 2016
- [c27] Fernando I. Ireta Muñoz, Andrew I. Comport: Point-to-hyperplane RGB-D pose estimation: Fusing photometric and geometric measurements. IROS 2016: 24-29
- [c26] Fernando I. Ireta Muñoz, Andrew I. Comport: A proof that fusing measurements using Point-to-hyperplane registration is invariant to relative scale. MFI 2016: 517-522
- [c25] Arnaud Tanguy, Pierre Gergondet, Andrew I. Comport, Abderrahmane Kheddar: Closed-loop RGB-D SLAM multi-contact control for humanoid robots. SII 2016: 51-57
- 2015
- [j7] Maxime Meilland, Andrew I. Comport, Patrick Rives: Dense Omnidirectional RGB-D Mapping of Large-scale Outdoor Environments for Real-time Localization and Autonomous Navigation. J. Field Robotics 32(4): 474-503 (2015)
- 2014
- [j6] Tommi Tykkälä, Andrew I. Comport, Joni-Kristian Kämäräinen, Hannu Hartikainen: Live RGB-D camera tracking for television production studios. J. Vis. Commun. Image Represent. 25(1): 207-217 (2014)
- [c24] Pierre Gergondet, Damien Petit, Maxime Meilland, Abderrahmane Kheddar, Andrew I. Comport, Andrea Cherubini: Combining 3D SLAM and visual tracking to reach and retrieve objects in daily-life indoor environments. URAI 2014: 600-604
- 2013
- [c23] Maxime Meilland, Tom Drummond, Andrew I. Comport: A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration. ICCV 2013: 2016-2023
- [c22] Maxime Meilland, Andrew I. Comport: Super-resolution 3D tracking and mapping. ICRA 2013: 5717-5723
- [c21] Tommi Tykkala, Andrew I. Comport, Joni-Kristian Kamarainen: Photorealistic 3D mapping of indoors by RGB-D scanning process. IROS 2013: 1050-1055
- [c20] Maxime Meilland, Andrew I. Comport: On unifying key-frame and voxel-based dense visual SLAM at large scales. IROS 2013: 3677-3683
- [c19] Maxime Meilland, Christian Barat, Andrew I. Comport: 3D High Dynamic Range dense visual SLAM and its application to real-time object re-lighting. ISMAR 2013: 143-152
- [c18] Tommi Tykkälä, Hannu Hartikainen, Andrew I. Comport, Joni-Kristian Kämäräinen: RGB-D Tracking and Reconstruction for TV Broadcasts. VISAPP (2) 2013: 247-252
- 2011
- [c17] Maxime Meilland, Andrew I. Comport, Patrick Rives: Real-time Dense Visual Tracking under Large Lighting Variations. BMVC 2011: 1-11
- [c16] Andrew I. Comport, Maxime Meilland, Patrick Rives: An asymmetric real-time dense visual localisation and mapping system. ICCV Workshops 2011: 700-703
- [c15] Tommi Tykkala, Cédric Audras, Andrew I. Comport: Direct Iterative Closest Point for real-time visual odometry. ICCV Workshops 2011: 2050-2056
- [c14] Tommi Tykkala, Andrew I. Comport: A dense structure model for image based stereo SLAM. ICRA 2011: 1758-1763
- [c13] Tiago Gonçalves, Andrew I. Comport: Real-time direct tracking of color images in the presence of illumination variation. ICRA 2011: 4417-4422
- [c12] Andrew I. Comport, Robert E. Mahony, Fabien Spindler: A visual servoing model for generalised cameras: Case study of non-overlapping cameras. ICRA 2011: 5683-5688
- [c11] Maxime Meilland, Andrew Ian Comport, Patrick Rives: Dense visual mapping of large scale environments for real-time localisation. IROS 2011: 4242-4248
- 2010
- [j5] Andrew I. Comport, Ezio Malis, Patrick Rives: Real-time Quadrifocal Visual Odometry. Int. J. Robotics Res. 29(2-3): 245-266 (2010)
- [c10] Gabriela Gallegos, Maxime Meilland, Patrick Rives, Andrew I. Comport: Appearance-based SLAM relying on a hybrid laser/omnidirectional sensor. IROS 2010: 3005-3010
- [c9] Maxime Meilland, Andrew I. Comport, Patrick Rives: A spherical robot-centered representation for urban navigation. IROS 2010: 5196-5201
2000 – 2009
- 2007
- [j4] Andrew I. Comport, Éric Marchand, François Chaumette: Kinematic sets for real-time robust articulated object tracking. Image Vis. Comput. 25(3): 374-391 (2007)
- [c8] Andrew I. Comport, Ezio Malis, Patrick Rives: Accurate Quadrifocal Tracking for Robust 3D Visual Odometry. ICRA 2007: 40-45
- 2006
- [j3] Andrew I. Comport, Éric Marchand, François Chaumette: Statistically robust 2-D visual servoing. IEEE Trans. Robotics 22(2): 415-420 (2006)
- [j2] Andrew I. Comport, Éric Marchand, Muriel Pressigout, François Chaumette: Real-Time Markerless Tracking for Augmented Reality: The Virtual Visual Servoing Framework. IEEE Trans. Vis. Comput. Graph. 12(4): 615-628 (2006)
- 2005
- [b1] Andrew I. Comport: Towards a computer imagination: Robust real-time 3D tracking of rigid and articulated objects for augmented reality and robotics (Vers une imagination par ordinateur : Suivi robuste d'objets 3D rigides et articulés en temps réel pour la réalité augmentée et la robotique). University of Rennes 1, France, 2005
- [j1] Andrew I. Comport, Éric Marchand, François Chaumette: Efficient model-based tracking for robot vision. Adv. Robotics 19(10): 1097-1113 (2005)
- [c7] Andrew I. Comport, Danica Kragic, Éric Marchand, François Chaumette: Robust Real-Time Visual Tracking: Comparison, Theoretical Analysis and Performance Evaluation. ICRA 2005: 2841-2846
- 2004
- [c6] Andrew I. Comport, Éric Marchand, François Chaumette: Complex Articulated Object Tracking. AMDO 2004: 189-201
- [c5] Andrew I. Comport, Éric Marchand, François Chaumette: Object-Based Visual 3D Tracking of Articulated Objects via Kinematic Sets. CVPR Workshops 2004: 2
- [c4] Éric Marchand, Andrew I. Comport, François Chaumette: Improvements in Robust 2D Visual Servoing. ICRA 2004: 745-750
- [c3] Andrew I. Comport, Éric Marchand, François Chaumette: Robust model-based tracking for robot vision. IROS 2004: 692-697
- 2003
- [c2] Andrew I. Comport, Muriel Pressigout, Éric Marchand, François Chaumette: A visual servoing control law that is robust to image outliers. IROS 2003: 492-497
- [c1] Andrew I. Comport, Éric Marchand, François Chaumette: A real-time tracker for markerless augmented reality. ISMAR 2003: 36-45