DGIST (Daegu Gyeongbuk Institute of Science and Technology)
- Daegu (UTC+09:00)
- https://www.linkedin.com/in/vvon-joon
Stars
- [Information Fusion 2025] A Survey on Occupancy Perception for Autonomous Driving: The Information Fusion Perspective
- Vision-based 3D occupancy prediction in autonomous driving: a review and outlook
- [ECCV 2022] The official implementation of BEVFormer, a camera-only framework for autonomous driving perception, e.g., 3D object detection and semantic map segmentation.
- [CVPR 2025] Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera
- [NeurIPS 2024] Depth Anything V2: A More Capable Foundation Model for Monocular Depth Estimation
- [CoRL 2022] SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.
- DeepRacer Vision Timer: a vision-based AI automatic timing system and timer for offline competitions using AWS DeepRacer.
- CAVIS: Context-Aware Video Instance Segmentation
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538)
- Code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks showing how to use the model.
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation model for monocular depth estimation.
- Understanding compilers - CNU, Fall 2021
- 📘 The experiment tracker for foundation model training
- Race-line calculation for a DeepRacer track
- Reward function for training AWS DeepRacer based on reinforcement learning
- Creates an AWS DeepRacer training environment which can be deployed in the cloud, or locally on Ubuntu Linux, Windows, or Mac.
- A repository for storing Team 갑천다이빙장인's solutions.
- 🇰🇷 A repository for translating the official PyTorch tutorials into Korean.