Releases: lwtnn/lwtnn

v2.14.1

28 Nov 13:54

This adds a few new features since the last release:

  • Various updates to unit tests, thanks @matthewfeickert
  • Add support for keras "add" layer, thanks @tprocter46
  • Updates to support tensorflow 2.11.0
  • Add support for 1d CNNs, thanks to @jcvoigt
  • Stop overwriting CMAKE_CXX_FLAGS, which was breaking larger projects

Version 2.13

18 Apr 14:54

This adds a few new features and bug fixes:

  • Add SimpleRNN layer (thanks @laurilaatu)
  • Add Python 3.10 to the CI testing matrix, and some pre-commit hooks (@matthewfeickert)
  • Allow more operations to be wrapped in keras TimeDistributed, specifically ones that are only used in training (thanks @sfranchel)
  • Update SKLearn converter: port to python 3, add support for logistic output activation, add sanity checks (thanks @TJKhoo)
  • Fix compilation error with Eigen 3.4.0, which appeared in lwtnn version 2.10.

Version 2.12.1

08 Jul 11:15

This is a bugfix release. It resolves an issue some users saw where the sigmoid activation function would throw floating point overflow warnings.

To avoid these floating point exceptions (FPEs) we return 0 when the sigmoid input is less than -30, and 1 when the input is greater than 30. This is the same behavior we had prior to e3622dd. The outputs should be unchanged to O(1e-6), if they change at all.
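
A minimal sketch of the clamped sigmoid described above (illustrative only, not the exact lwtnn implementation):

    #include <cmath>

    // Illustrative clamped sigmoid matching the behavior described above:
    // below -30 return 0, above 30 return 1, otherwise the usual formula.
    // At |x| = 30 the plain sigmoid already agrees with 0 or 1 to much
    // better than 1e-6, so the clamp costs essentially nothing in accuracy
    // while avoiding std::exp on the extreme arguments that can raise FPEs.
    double clamped_sigmoid(double x) {
      if (x < -30.0) return 0.0;
      if (x > 30.0) return 1.0;
      return 1.0 / (1.0 + std::exp(-x));
    }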

Version 2.12

01 Jul 09:16

This release fixes a number of bugs:

  • Properly handle InputLayer in Keras Sequential models (thanks @QuantumDancer)
  • Fix bug supporting TimeDistributed BatchNormalization layers
  • Fix bugs in lwtnn-split-keras-network.py
  • Fix some annoying warnings in compilation
  • Fixes for compilation errors in gcc11 (thanks @matthewfeickert)
  • Replace broken link for boost in the minimal install (now getting boost from a public CERN URL)

There were some tweaks to reduce element-wise calls through std::function in activation functions. This should mean a lot less pointer dereferencing.
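
To illustrate the kind of change this refers to (a sketch, not the actual lwtnn code): routing each element through a std::function forces an indirect call per matrix entry, while an array-wise Eigen expression can be inlined and vectorized.

    #include <Eigen/Dense>
    #include <functional>

    // Sketch: per-element dispatch through std::function, one indirect
    // call per entry of the matrix.
    Eigen::MatrixXd apply_indirect(const Eigen::MatrixXd& m,
                                   const std::function<double(double)>& act) {
      return m.unaryExpr(act);
    }

    // Sketch: the same idea (here ReLU) written as an array-wise
    // expression that Eigen can inline.
    Eigen::MatrixXd apply_relu(const Eigen::MatrixXd& m) {
      return m.array().max(0.0).matrix();
    }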

There were also some improvements to overall code health, all from @matthewfeickert:

  • Add Github Actions based CI
  • Add code linting, plus pre-commit hooks to run the linting, and enforce some of it in CI
  • Expand the test matrix to include C++17, C++20, and gcc11
  • Specify dependency versions more carefully, increase the range of allowed CMake versions

Version 2.11.1

07 Dec 17:31

This release fixes bugs in the CMake code.

CMake would build the project fine, but the project would be installed incorrectly. Thanks to @krasznaa for pointing out the problem.

Version 2.11

30 Nov 21:07

Warning: Installation with CMake doesn't work properly with this version! A fix is on the way!

Major Change: Templated Classes

The biggest change in this release is that all the core matrix classes are now templated. Thanks to @benjaminhuth, who implemented it as a way to make networks differentiable (with autodiff: https://github.com/autodiff/autodiff/). It might also be useful to lower the memory footprint of NNs by using float rather than double as the elements of Eigen matrices.

The new templated classes can be found in include/lwtnn/generic and in the lwt::generic:: namespace. To avoid breaking backward compatibility, all the old files still exist in their original locations. Where possible these files contain wrappers around the new templated versions. In most cases, building against these wrapper classes won't force you to include Eigen headers (as was the case before).

Major Addition: FastGraph

It turns out that looking up every input in a std::map<std::string,double> is really slow in some cases! This release adds a new interface, FastGraph, which takes its inputs as std::vector<Eigen::VectorX<T>> (for scalar inputs) or std::vector<Eigen::MatrixX<T>> (for sequences), and returns an Eigen::VectorX<T>. The FastGraph interface is templated, so you can use any element type supported by Eigen.
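
A hedged usage sketch of the interface described above, with float elements. The compute call shown in the comment is an assumption about the API rather than a verified signature; check the headers under include/lwtnn/generic for the real one.

    #include <Eigen/Dense>
    #include <vector>

    int main() {
      // One Eigen vector per scalar input node and one Eigen matrix per
      // sequence input node, in a fixed order: no per-call string lookups
      // as with the std::map based interfaces.
      std::vector<Eigen::VectorXf> scalars   = { Eigen::VectorXf::Zero(4) };
      std::vector<Eigen::MatrixXf> sequences = { Eigen::MatrixXf::Zero(3, 10) };

      // Assumed call on a FastGraph-like object built from a GraphConfig:
      // Eigen::VectorXf out = graph.compute(scalars, sequences);
      return 0;
    }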

Minor Fixes

There are quite a few fixes, mostly in the python converter code:

  • The keras converter now supports activation layers in sequences (#99)
  • For those using BUILTIN_EIGEN with CMake, bumped the version of Eigen from 3.2.9 to 3.3.7 (thanks @ductng). If you're not using BUILTIN_EIGEN this should have no effect on you.
  • Various compatibility fixes for newer versions of Keras, deprecated Travis settings, etc
  • lwtnn-split-keras-network.py no longer depends on keras

Version 2.10

13 Aug 05:32

This release adds a few minor things to the Python code. Nothing changes in the C++ code, but there are some fixes and extensions in the Python:

  • Use a "swish" activation function that is closer to the one in keras-contrib (thanks @aghoshpub); see the sketch after this list
  • Fixed keras2json.py, which was breaking on some sequential models from more recent versions of keras
  • Added a slightly more robust check for consistency between the CMake version and the tag in git
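
For reference, swish in keras-contrib is defined as f(x) = x * sigmoid(beta * x); a minimal sketch follows (the beta handling here is illustrative, not necessarily how lwtnn stores the parameter):

    #include <cmath>

    // Illustrative swish, following the keras-contrib definition
    // f(x) = x * sigmoid(beta * x); beta = 1 gives the plain x * sigmoid(x).
    double swish(double x, double beta = 1.0) {
      return x / (1.0 + std::exp(-beta * x));
    }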

Version 2.9

18 Jun 20:43

This release adds several new features:

  • Inputs are read into LightweightGraph lazily. In some cases this makes it possible to evaluate parts of a multi-output graph without specifying all the inputs (see the sketch after this list).
  • Sum layer to support deep sets.
  • Added an abs activation function.
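
A hedged sketch of what the lazy input reading enables. The compute(inputs, output_name) form shown in the comments is an assumption about the LightweightGraph interface, and the node and variable names are hypothetical; consult the lwtnn headers for the exact signature.

    #include <map>
    #include <string>
    // #include "lwtnn/LightweightGraph.hh"  // actual lwtnn header

    // Hypothetical sketch: with lazy input reading, requesting a single
    // output node should only require the inputs that feed that node.
    void evaluate_one_output(/* lwt::LightweightGraph& graph */) {
      std::map<std::string, std::map<std::string, double>> inputs;
      inputs["node_a"] = {{"var1", 0.5}, {"var2", 1.2}};  // hypothetical names
      // Inputs that only feed the other outputs can be left out:
      // auto out = graph.compute(inputs, "output_a");    // assumed overload
    }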

We've also fixed a number of C++17 compiler warnings (thanks to @VukanJ) and added C++17 builds to the Travis build matrix.

Version 2.8.1

04 Mar 16:49

This is a bugfix release which only affects networks that used ELU activation functions.

Since version 2.8 the JSON files produced with kerasfunc2json.py were unreadable on the C++ side. These JSON files are now readable.

In addition, JSON files produced after this release should be readable on the C++ side in version 2.8.

Version 2.8

10 Nov 17:41

This release introduces several parameterized activation functions:
