This is the official implementation of "LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels" (Accepted at CVPR 2024).
🐍 Geometric Computer Vision Library for Spatial AI
PyTorch native quantization and sparsity for training and inference
Codebase for Image Classification Research, written in PyTorch.
Curated list of project-based tutorials
Dataframes powered by a multithreaded, vectorized query engine, written in Rust
FastAPI framework, high performance, easy to learn, fast to code, ready for production
🤗 Transformers: the model-definition framework for state-of-the-art machine learning across text, vision, audio, and multimodal models, for both inference and training.
A complement to pgvector for high performance, cost efficient vector search on large workloads.
Open-source vector similarity search for Postgres
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
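The retrieval idea behind CLIP — embed the image and each candidate caption, then pick the caption whose embedding has the highest cosine similarity to the image's — can be sketched with toy NumPy vectors (this is a conceptual sketch, not the repository's API; the embeddings are made up):

```python
import numpy as np


def best_caption(image_emb, text_embs, captions):
    # Normalize embeddings, then rank captions by cosine similarity
    # to the image embedding and return the top match.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = txt @ img
    return captions[int(np.argmax(scores))]


# Toy embeddings standing in for CLIP's image/text encoder outputs.
image = np.array([1.0, 0.2, 0.0])
texts = np.array([[0.9, 0.1, 0.1],   # "a photo of a cat"
                  [0.0, 1.0, 0.0]])  # "a photo of a dog"
match = best_caption(image, texts, ["a photo of a cat", "a photo of a dog"])
```

In the real model the encoders are trained so that matching image–text pairs land close together under exactly this similarity score.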
This is the official repository for our recent work: PIDNet
A playbook for systematically maximizing the performance of deep learning models.
PyTorch Lightning + Hydra. A very user-friendly template for ML experimentation. ⚡🔥⚡
You like pytorch? You like micrograd? You love tinygrad! ❤️
[CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
Create, manipulate and convert representations of position and orientation in 2D or 3D using Python
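The kind of representation such a library manipulates can be illustrated in plain NumPy (a minimal sketch of a 2D rotation, not the library's own API):

```python
import numpy as np


def rot2(theta):
    # 2D rotation matrix for angle theta (an element of SO(2)).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])


R = rot2(np.pi / 2)            # rotate by 90 degrees
p = R @ np.array([1.0, 0.0])   # the x-axis unit vector maps to the y-axis
```

A valid rotation matrix is orthogonal (R @ R.T is the identity) with determinant +1, which is what dedicated pose libraries check and preserve when composing and converting representations.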
GitHub repository for Visio-tactile Implicit Representations of Deformable Objects (ICRA 2022)
We write your reusable computer vision tools. 💜
[ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity"
😎 Awesome lists about all kinds of interesting topics
Segment Anything in High Quality [NeurIPS 2023]
CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.