This project implements a YOLO object detection and tracking server running on the OrangePi 5 Max development board. The server is designed for real-time tracking of low-altitude drones, featuring servo-driven target tracking, dual-mode (thermal infrared and visible light) image processing, remote access and control over the network, and support for expansion into a multi-station array. The project is written in C++, supports cross-platform compilation, and uses the CMake build system.
- Real-time Object Detection: Uses YOLO model for object detection with customizable target types, achieving 30 FPS detection speed.
- Dual-mode Image Processing: Supports real-time processing and switching between thermal infrared and visible light images.
- Servo Control: Implements real-time target tracking using Johor bus servos, with Kalman filtering to smooth and predict target positions, reducing latency.
- Network Access: Enables remote access and control via TCP/IP protocol.
- Multi-station Array: Supports expansion into multiple base stations for collaborative tracking, improving accuracy and coverage.
- Thread Safety: Uses thread-safe queues for data transmission to ensure consistency in multi-threaded environments.
- Cross-platform Support: Built with CMake for compatibility across multiple platforms.
- OrangePi 5 Max Optimization: Specifically optimized for the OrangePi 5 Max development board to maximize hardware performance.
- RKNN Model Support: Utilizes RKNN models for efficient object detection.
- UVC Protocol Support: Compatible with UVC protocol USB cameras and thermal imagers.
This project is licensed under the GPL-3.0 License.
Before you begin, ensure you meet the following requirements:
- OrangePi 5 Max Development Board: The project is designed for the OrangePi 5 Max; compile and run it directly on the board.
- Operating System: Compatible Linux distribution (e.g., Ubuntu, Debian).
- Dependencies:
- CMake (>= 3.10)
- OpenCV (>= 4.5)
- RKNN SDK
- libuvc
- WiringPi
- See installation steps for detailed dependencies.
First, install the necessary dependencies:
sudo add-apt-repository ppa:jjriek/rockchip-multimedia # Add Rockchip multimedia PPA. If not added, manually install librga-dev.
sudo apt-get update
sudo apt-get install -y build-essential gcc g++ cmake ninja-build git libopencv-dev libuvc-dev libusb-1.0-0-dev zlib1g-dev librga-dev gdb nlohmann-json3-dev libeigen3-dev libtbb-dev
# wiringpi requires separate installation
For the OrangePi 5 Max, install wiringOP (the Orange Pi port of WiringPi).
git clone https://github.com/yuunnn-w/orangepi_cv.git
cd orangepi_cv
In `main.cpp`, locate the following lines:
uint16_t vid = 0x0bdc; // Vendor ID (use lsusb to check)
uint16_t pid = 0x0678; // Product ID
Replace `vid` and `pid` with your camera's actual vendor and product IDs. Use the `lsusb` command to find this information.
In `servo_driver.h`, locate the following lines:
#define SERVO_ANGLE_RANGE 360.0 // Maximum servo rotation angle
#define SERVO_ANGLE_HALF_RANGE 180.0 // Half of the maximum angle
Replace `SERVO_ANGLE_RANGE` and `SERVO_ANGLE_HALF_RANGE` with your servo's actual angle range, adjusted to your servo model.
Note: This project uses Johor bus servos (official website: https://johorobot.com/). Other servo models are not currently supported.
Create a build directory and compile the project with CMake:
mkdir build
cd build
cmake ..
make -j$(nproc)
Before running, ensure all devices are connected to your OrangePi 5 Max, including:
- USB camera
- Thermal imager
- Servo
- Other sensors (if applicable)
After confirming the board is connected to the local network, start the server:
./orangepi_cv
On a Windows PC connected to the same network, navigate to the `python` folder in the project directory and run:
python3 ui.py
The client is written in PyQt5. Before running, ensure Python3 and PyQt5 are installed:
pip install numpy opencv-python pyqt5 matplotlib pynput
- Start the Server: Run `./orangepi_cv` on the OrangePi 5 Max.
- Start the Client: Run `python3 ui.py` on the Windows PC.
- Connect to Server: The client automatically detects online base stations. Select a station and click "Start Video" to view real-time detection.
- Servo Control: Enable the servo remote switch in the client, then use the `W`, `A`, `S`, and `D` keys to control servo movement.
- Capture Frames: Click "Get Current Frame" to save the current detected image locally.
- Real-time Situational Display: Click "Real-time Situational Map" to open a window showing detected target positions.
- System Sleep: Click "System Sleep" to pause detection. Click "System Wake" to resume.
- System Shutdown: Click "System Shutdown" to exit the program (irreversible; use with caution).
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
This project is licensed under the GPL-3.0 License. See LICENSE for details.
Note: GPL-3.0 permits modification and derivative development, but derived code must also be released under the GPL and must credit the original author and source.
For questions or suggestions, contact:
- Maintainer: Xiaoyang
- Email: jiaxinsugar@gmail.com
- GitHub: yuunnn-w