andrewmm511/Cuda-pHash

Highly-optimized perceptual hash calculations on GPU

Overview

CUDA pHash is a high-performance, GPU-accelerated tool for computing the perceptual hash (pHash) of images.

CUDA pHash outperforms other leading perceptual hash implementations by a wide margin, processing massive datasets via highly-optimized compute pipelines.

Performance Benchmark 🚀

Implementation  | Wall-clock time (ms)* ⏱️ | Speedup vs. CUDA ⚡
⚡ CUDA pHash   | 000.000                  | Baseline 🏆
OpenCV pHash    | 000.000                  | 0.0× slower 🐢
Python pHash    | 000.000                  | 0.0× slower 🐢

* pHash calculation on COCO 2017 (163,957 images) using a 13th Gen Intel Core i9-13900K and an NVIDIA RTX 3080 over PCIe.

Quick Start

Below is a simple example demonstrating how to instantiate a CUDA pHash object, compute hashes, and compare them using Hamming distance:

#include "cuda_phash.cuh"
#include <iostream>
#include <vector>
#include <string>

// Computes Hamming distance between two bit-packed hash vectors
int hammingDistance(const std::vector<uint32_t>& hashA, const std::vector<uint32_t>& hashB) {
    int distance = 0;
    for (size_t i = 0; i < hashA.size(); ++i) {
        uint32_t diff = hashA[i] ^ hashB[i]; // XOR to find differing bits
        while (diff) {
            distance += (diff & 1);
            diff >>= 1;
        }
    }
    return distance;
}

int main() {
    // Initialize CUDA pHash with:
    // - hashSize = 8
    // - highFreqFactor = 4
    // - batchSize = 5000
    CudaPhash phasher(8, 4, 5000);

    // Image paths to process
    std::vector<std::string> imagePaths = { "image1.jpg", "image2.jpg", "image3.jpg" };

    // Compute perceptual hashes
    std::vector<std::vector<uint32_t>> hashes = phasher.phash(imagePaths);

    // Compute Hamming distance between image1 and image2
    int dist = hammingDistance(hashes[0], hashes[1]);
    std::cout << "Hamming distance between image1 and image2: " << dist << std::endl;

    return 0;
}

Hash Output Format

The phash() function returns a std::vector<std::vector<uint32_t>>, where each inner vector represents the bit-packed hash of an image. If desired, these can be converted to binary or hexadecimal representations by iterating over the 32-bit words.
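
For example, a minimal sketch of converting a hash to a hexadecimal string (the helper name toHex is illustrative, not part of the CUDA pHash API):

#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

// Converts a bit-packed hash to a hex string, one 8-digit group
// per 32-bit word. "toHex" is a hypothetical helper for illustration.
std::string toHex(const std::vector<uint32_t>& hash) {
    std::ostringstream out;
    for (uint32_t word : hash) {
        out << std::hex << std::setw(8) << std::setfill('0') << word;
    }
    return out.str();
}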

Optimized Perceptual Hashing via GPU Acceleration

1. Introduction

Perceptual hashing (pHash) is a technique commonly used to identify visually similar images while remaining resilient to transformations such as scaling, small rotations, and lighting changes. Unlike cryptographic hashes, which produce radically different outputs for even minor input changes, pHash captures an image's structural essence, allowing robust similarity comparisons.

Despite its widespread adoption in image retrieval and duplicate detection, existing pHash implementations are often limited by computational efficiency, especially when scaling to large datasets. The standard approach relies on the Discrete Cosine Transform (DCT) and mean-thresholding, but traditional CPU-based implementations are computationally expensive. To address these limitations, we propose a highly optimized GPU-accelerated pHash implementation that significantly improves efficiency by leveraging CUDA and cuBLAS.

2. Methodology

2.1 Discrete Cosine Transform (DCT)

The core of pHash computation involves transforming an image into the frequency domain using the 2D Discrete Cosine Transform (DCT):

$$ DCT(u,v) = \alpha(u) \alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\left(\frac{\pi}{N} (x + 0.5) u \right) \cos\left(\frac{\pi}{N} (y + 0.5) v \right) $$

where $$\alpha(u)$$ is defined as:

$$ \alpha(u) = \begin{cases} \frac{1}{\sqrt{N}}, & u = 0 \\ \sqrt{\frac{2}{N}}, & u > 0 \end{cases} $$

By retaining only the low-frequency coefficients (the top-left sub-block), we extract structural features that remain stable under common transformations such as scaling, small rotations, and lighting changes.
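
Because the 2D DCT is separable, it can be computed as two matrix products, DCT(A) = T A Tᵀ, where T is the N×N DCT basis matrix; this is the formulation that maps onto batched GEMM calls on the GPU (see Section 2.3). A minimal CPU-side sketch of constructing T from the definition above (the function name is illustrative):

#include <cmath>
#include <vector>

// Builds the N x N DCT-II basis matrix T (row-major), so that the
// 2D DCT of an image block A can be computed as T * A * T^T.
std::vector<float> dctMatrix(int N) {
    std::vector<float> T(N * N);
    const float pi = 3.14159265358979f;
    for (int u = 0; u < N; ++u) {
        // alpha(u) from the definition above
        float alpha = (u == 0) ? std::sqrt(1.0f / N) : std::sqrt(2.0f / N);
        for (int x = 0; x < N; ++x) {
            T[u * N + x] = alpha * std::cos(pi / N * (x + 0.5f) * u);
        }
    }
    return T;
}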

2.2 Mean Thresholding & Bit Packing

To generate a compact binary hash:

  1. An n × n subset of DCT coefficients is selected.
  2. The mean of these coefficients is computed.
  3. Each coefficient is compared to the mean:
    • 1 if the coefficient is above the mean
    • 0 otherwise
  4. The resulting binary sequence is bit-packed into 32-bit words for memory-efficient storage and rapid comparison.
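
A minimal CPU-side sketch of steps 2–4 (the GPU version performs the same logic with shared memory and warp-wide ballots; the function name is illustrative):

#include <cstdint>
#include <vector>

// Packs an n x n block of DCT coefficients into 32-bit words:
// each bit is 1 if the coefficient exceeds the mean, 0 otherwise.
std::vector<uint32_t> packHash(const std::vector<float>& coeffs) {
    float mean = 0.0f;
    for (float c : coeffs) mean += c;
    mean /= coeffs.size();

    std::vector<uint32_t> hash((coeffs.size() + 31) / 32, 0);
    for (size_t i = 0; i < coeffs.size(); ++i) {
        if (coeffs[i] > mean) {
            hash[i / 32] |= 1u << (i % 32);  // set bit i
        }
    }
    return hash;
}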

2.3 GPU Optimization Strategies

Standard CPU-based implementations of pHash suffer from high computational costs, particularly during the DCT and thresholding steps. While a GPU can accelerate the DCT substantially, a key challenge is the time required to transfer images to the GPU over PCIe, which erodes those gains. This data-transfer bottleneck is often the primary reason that existing GPU-based pHash implementations have not achieved their full potential.

To address this, we implemented several CUDA-specific optimizations to minimize overhead and maximize performance:

  • Batched DCT Computation: We leverage cuBLAS to perform matrix-matrix multiplications in parallel across multiple images, resulting in highly-optimized DCT times of less than 40ms for a batch of 5,000 256x256 images.
  • Efficient Memory Management:
    • Pinned (page-locked) memory facilitates faster CPU-GPU transfers.
    • Shared memory and warp-wide ballot operations reduce redundant computations.
  • Asynchronous Execution: We implement overlapping pipelines to concurrently execute image loading, preprocessing, and hashing, effectively eliminating bottlenecks.
  • Optimized Data Transfer Strategies:
    • Minimized PCIe Transfers: By structuring computations to reduce unnecessary memory copies, we significantly decrease PCIe latency.
    • Double Buffering: This technique allows image preprocessing and GPU computation to overlap, hiding data transfer delays.
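
A minimal sketch of the double-buffering pattern using pinned host memory and two CUDA streams (the kernel body, names, and batch staging are illustrative placeholders, not the library's actual pipeline):

#include <cstdint>
#include <cstring>
#include <cuda_runtime.h>

// Illustrative stub: the real pipeline runs resize, DCT, and
// thresholding here. Shown only to demonstrate the overlap pattern.
__global__ void hashBatch(const float* in, uint32_t* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] > 0.0f ? 1u : 0u;  // placeholder work
}

// Double buffering: while batch i is copied and hashed on stream b,
// the CPU stages batch i+1 into the other pinned buffer.
void pipeline(const float* src, int numBatches, int batchElems) {
    float* h_in[2]; float* d_in[2]; uint32_t* d_out[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMallocHost(&h_in[b], batchElems * sizeof(float));  // pinned host memory
        cudaMalloc(&d_in[b], batchElems * sizeof(float));
        cudaMalloc(&d_out[b], batchElems * sizeof(uint32_t));
        cudaStreamCreate(&stream[b]);
    }
    for (int i = 0; i < numBatches; ++i) {
        int b = i & 1;                     // alternate between the two buffers
        cudaStreamSynchronize(stream[b]);  // wait until this buffer is free
        std::memcpy(h_in[b], src + (size_t)i * batchElems,
                    batchElems * sizeof(float));  // CPU-side staging
        cudaMemcpyAsync(d_in[b], h_in[b], batchElems * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        hashBatch<<<(batchElems + 255) / 256, 256, 0, stream[b]>>>(
            d_in[b], d_out[b], batchElems);
    }
    for (int b = 0; b < 2; ++b) cudaStreamSynchronize(stream[b]);
    // ...copy results back and free buffers/streams...
}

With two streams, the copy engine and the SMs stay busy simultaneously, so the PCIe transfer for batch i+1 is hidden behind the kernel time for batch i.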

3. Performance and Scalability

Empirical evaluations show that our GPU-accelerated approach outperforms traditional CPU implementations by orders of magnitude, particularly for large-scale datasets. By parallelizing DCT computation and optimizing memory access patterns, we achieve substantial speedups while maintaining high accuracy and robustness in perceptual similarity detection.

4. Conclusion

We present an optimized pHash implementation that harnesses GPU acceleration to dramatically improve efficiency without sacrificing robustness. This approach enables large-scale image processing applications, including duplicate detection, content-based retrieval, and near-duplicate search, to scale effectively with growing data demands. Future work includes extending the framework to support additional perceptual hashing techniques and further optimizing CUDA kernels for specific hardware architectures.

Installation

1. Clone or Download the Repository

git clone https://github.com/andrewmm511/Cuda-pHash.git
cd Cuda-pHash

2. Install Dependencies

Ensure the following are installed:

  • CUDA Toolkit (tested with CUDA 11+)
  • cuBLAS (included with CUDA)
  • OpenCV (for image loading and preprocessing)
  • Modern C++ Compiler (e.g., MSVC, GCC, or Clang)

3. Build from Source

Since no CMake configuration is provided, build the project using your preferred method:

Windows (Visual Studio)

  1. Create a new Visual C++ project.
  2. Add all .cu and .cuh files.
  3. Configure project settings to enable CUDA:
    • Use "CUDA Runtime API" in VS properties.
  4. Link against:
    • cublas.lib
    • cudart.lib
    • OpenCV libraries (e.g., opencv_world455.lib).

Linux / macOS (Command Line)

Compile using nvcc and link required libraries:

nvcc -o cuda_phash main.cu cuda_phash.cu -lcublas `pkg-config --cflags --libs opencv4`

Adjust paths and library flags to match your system configuration.

4. Run the Program

Once compiled, include cuda_phash.cuh in your project and ensure the CUDA Toolkit and OpenCV runtime libraries are discoverable at runtime (e.g., via PATH or LD_LIBRARY_PATH).
