
BrabeNetz

Brain - Image by medium.com

BrabeNetz is a supervised neural network written in C++, designed to be as fast as possible. It multithreads effectively on the CPU where needed, is heavily performance-optimized, and is thoroughly inline-documented. System Technology (BRAH) TGM 2017/18

NuGet

PM> Install-Package BrabeNetz

I've written two examples of using BrabeNetz in the Trainer class: training an XOR function ({0,0}=0, {0,1}=1, ..) and recognizing handwritten digits.

In my XOR example I'm using a {2,3,1} topology (2 input, 3 hidden and 1 output neurons), but BrabeNetz scales until the hardware reaches its limits. The digit recognizer uses a {784,500,100,10} network to train on handwritten digits from the MNIST database.
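As a sketch of what the XOR case looks like in code, using the brabenetz.h wrapper interface described under Usage below (default-constructing properties is an assumption here, not confirmed by the docs):

    #include "brabenetz.h"
    #include <vector>

    int main()
    {
        // A {2,3,1} XOR network: 2 inputs, 3 hidden neurons, 1 output.
        // Weights and biases start out randomized.
        brabenetz net({ 2, 3, 1 }, properties());

        std::vector<double> input = { 0.0, 1.0 };  // XOR(0,1)
        std::vector<double> expected = { 1.0 };    // ...should be 1

        network_result result = net.feed(input);   // forward propagation
        double error = result.adjust(expected);    // backpropagation; returns total error
        (void)error;
        return 0;
    }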

Be sure to read the network description, and check out my digit recognizer written in Qt (using a BrabeNetz network trained on the MNIST dataset).

Benchmarks

  • Build: Release x64 | Windows 10 64-bit
  • CPU: Intel i7-6700K @ 4.0 GHz (4 cores / 8 threads)
  • RAM: HyperX Fury DDR4 32 GB CL14 2400 MHz
  • SSD: Samsung 850 EVO (540 MB/s)
  • Commit: 53328c3

Screenshots:

  • Actual prediction of the digit recognizer network on macOS Mojave
  • Console output with elapsed time (2 ms); training a XOR 1000 times takes just 0.49 ms
  • Actual prediction of the digit recognizer network on Debian Linux
  • Effectively using all available cores (24/24, 100% workload in Task Manager)
  • Task resource viewer (htop) on Linux (Debian 9, Linux 4.9.62, KDE Plasma)

Specs

  • Optimized algorithms via raw arrays instead of std::vector and more
  • Smart multithreading by OpenMP anywhere the spawn-overhead is worth the performance gain
  • Scalability (Neuron size, Layer count) - only limited by hardware
  • Easy to use (Inputs, outputs)
  • Randomly generated initial weights and biases
  • Easy binary save/load with network::save(string)/network::load(string) (state.nn file)
  • Sigmoid squashing function (see the sketch after this list)
  • Biases for each neuron
  • network_topology helper objects for loading/saving state and inspecting the network
  • brabenetz wrapper class for an easy-to-use interface
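To illustrate two of the points above (the sigmoid squashing function, and multithreading only where the spawn overhead is worth it), here is a minimal sketch. This is not BrabeNetz's actual internals; forward_layer and its workload threshold are invented for illustration:

    #include <cmath>

    // Standard sigmoid squashing function: maps any activation into (0, 1).
    inline double sigmoid(double x)
    {
        return 1.0 / (1.0 + std::exp(-x));
    }

    // "Smart multithreading": OpenMP's if-clause runs the loop serially
    // when the workload is too small to justify thread-spawn overhead.
    void forward_layer(const double* in, double* out, int n_in, int n_out,
                       const double* const* weights, const double* biases)
    {
        #pragma omp parallel for if(n_out * n_in > 10000)
        for (int j = 0; j < n_out; ++j)
        {
            double sum = biases[j];
            for (int i = 0; i < n_in; ++i)
                sum += in[i] * weights[j][i];
            out[j] = sigmoid(sum); // squash the weighted sum
        }
    }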

Usage

Example Usage Code

  1. Build & link library

  2. Choose your interface

    1. brabenetz.h: [Recommended] A wrapper around the raw network.h interface, with error handling and a modern C++ interface (std::vector, std::exception, etc.)
    2. network.h: The raw network with C-style arrays and no bounds or error checking. Only use this if maximum performance is required.
  3. Constructors

    1. (initializer_list<int>, properties): Construct a new neural network with the given network size (e.g. { 2, 3, 4, 1 }) and randomize all base weights and biases - ref
    2. (network_topology&, properties): Construct a new neural network with the given network topology and import its existing weights and biases - ref
    3. (string, properties): Construct a new neural network and load its state from the file specified in properties.state_file - ref
  4. Functions

    1. network_result brabenetz::feed(std::vector<double>& input_values): Feed input values to the network and forward-propagate through all neurons to estimate an output. Use the network_result structure (ref) to access the result of the forward propagation, e.g. .values to view the output values - ref
    2. double network_result::adjust(std::vector<double>& expected_output): Backward-propagate through the whole network to adjust erroneous neurons, and return the total network error - ref
    3. void brabenetz::save(string path): Save the network's state to disk by serializing weights - ref
    4. void brabenetz::set_learnrate(double value): Set the network's learning rate. It is good practice, and generally recommended, to use 1 divided by the train count, so the learning rate decreases the more often you train (see the training-loop sketch after this list) - ref
    5. network_topology& brabenetz::build_topology(): Build and set the network topology object of the current network's state (can be used for network visualization or similar) - ref
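Putting these pieces together, a minimal XOR training loop might look like the following. This sketch is built only from the functions listed above; default-constructing properties and the "state.nn" file name are assumptions:

    #include "brabenetz.h"
    #include <vector>

    int main()
    {
        brabenetz net({ 2, 3, 1 }, properties()); // randomized {2,3,1} network

        // XOR truth table
        std::vector<std::vector<double>> inputs   = { {0,0}, {0,1}, {1,0}, {1,1} };
        std::vector<std::vector<double>> expected = { {0},   {1},   {1},   {0}   };

        for (int train = 1; train <= 1000; ++train)
        {
            // Decrease the learning rate as training progresses (1 / train count)
            net.set_learnrate(1.0 / train);

            for (size_t i = 0; i < inputs.size(); ++i)
            {
                network_result result = net.feed(inputs[i]); // forward propagate
                result.adjust(expected[i]);                  // backpropagate
            }
        }

        net.save("state.nn"); // serialize the trained weights to disk
        return 0;
    }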

Usage examples can be found here and here.

Thanks for using BrabeNetz!

Buy Me a Coffee at ko-fi.com