Fused multiply-add: proposal to add math.fma() #73468
Fused multiply-add (henceforth FMA) is an operation which calculates the product of two numbers and then the sum of that product and a third number, with just one floating-point rounding. More concretely: r = x*y + z. Even though a single FMA CPU instruction might execute faster than separate multiply and add instructions, its main advantage comes from the increased precision of numerical computations that involve the accumulation of products. Examples which benefit from using FMA are: dot product [2], compensated arithmetic [3], polynomial evaluation [4], matrix multiplication, Newton's method and many more [5]. C99 includes an fma() function [6]. This proposal is about adding a new math.fma(x, y, z) function with the docstring:
'''Return a float representing the result of the operation `x*y + z` with a single rounding error, as defined by the platform C library. The result is the same as if the operation were carried out with infinite precision and then rounded to a floating-point number.''' Attached is a simple module for Python 3 demonstrating the fused multiply-add operation.
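To make the precision gain concrete, here is a small illustrative sketch (not part of the attached module) that uses fractions.Fraction as an exact reference; `single_rounding` is the value an fma would return:

```python
from fractions import Fraction

# a*b + c evaluated with plain float arithmetic rounds twice: once
# after the multiply, once after the add.  FMA rounds only once.
# Here c cancels the *rounded* product, exposing the multiply's error.
a = b = 1.0 + 2.0 ** -52        # 1 + ulp(1)
c = -(a * b)                    # negation of the rounded product

two_roundings = a * b + c       # the multiply's rounding error is lost
exact = Fraction(a) * Fraction(b) + Fraction(c)
single_rounding = float(exact)  # what fma(a, b, c) would return
```

With two roundings the result collapses to 0.0, while the single-rounding result recovers the tiny error term 2**-104.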
[1] https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation
Thread on python-ideas:
What's the point of adding this to the math module rather than to a more specialized library like NumPy?
I would say because it has wide applicability, especially considering the amount of code it adds. It is similar in spirit to
I don't know. If I want to compute a dot product, the first thing I'll do is import NumPy and then use the
The performance argument is unlikely to apply in this case. I suppose that the overhead of a function call in Python makes two operators faster than one function call. Alternatives to fma() for exact computations are integer arithmetic (if all values can be scaled to integers), fractions and decimal numbers. But since fma() has been a part of the C (C99), C++ (C++11) and POSIX (POSIX.1-2001) standards for a long time, I don't have objections against including it in the math module.
Agreed. This is mainly about accuracy, not speed: the FMA operation is a fundamental building block for floating-point arithmetic, is useful in some numerical algorithms, and essential in others (especially when doing things like double-double arithmetic). It would be valuable to have when prototyping numerical algorithms in pure Python. Given that it's supported in C99 and on current Windows, I'm +1 on including it in the math module. Note that implementing this is not quite as straightforward as simply wrapping the libm version, since we'll also want the correct exceptional behaviour, for consistency with the rest of the math module: i.e., we should be raising ValueError where the fma operation would signal the invalid FPE, and OverflowError where it would signal the overflow FPE.
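A pure-Python sketch of that exceptional behaviour (not CPython's actual C implementation) might look as follows; the Fraction-based fallback handles only finite inputs and, as a known limitation, loses the sign of an exactly-zero result:

```python
import math
from fractions import Fraction

def fma_checked(x, y, z):
    # Sketch of the exceptional behaviour discussed above:
    # invalid -> ValueError, overflow -> OverflowError,
    # matching the conventions of the rest of the math module.
    if math.isnan(x) or math.isnan(y) or math.isnan(z):
        return math.nan                      # quiet NaN in, quiet NaN out
    if math.isinf(x) or math.isinf(y):
        if x == 0.0 or y == 0.0:
            raise ValueError("invalid operation in fma")   # inf * 0
        prod = math.inf * math.copysign(1.0, x) * math.copysign(1.0, y)
        if math.isinf(z) and z != prod:
            raise ValueError("invalid operation in fma")   # inf - inf
        return prod
    if math.isinf(z):
        return z
    try:
        # Exact arithmetic, then one correctly-rounded conversion.
        # Caveat: the sign of an exactly-zero result is lost here.
        return float(Fraction(x) * Fraction(y) + Fraction(z))
    except OverflowError:
        raise OverflowError("overflow in fma") from None
```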
An implementation note: IEEE 754-2008 leaves it to the implementation to decide whether FMA operations like fma(inf, 0, nan) and fma(0, inf, nan) (where nan represents a quiet NaN and the inf and 0 can have arbitrary signs) signal the invalid operation FPE or not. (Ref: 7.2(c) in IEEE 754-2008.) I'd suggest that in this case we follow what Intel does in its x64 chips with FMA3 support. If I'm reading the table in section 2.3 of the Intel Advanced Vector Extensions Programming Reference correctly, Intel does *not* signal the invalid operation FPE in this case. That is, we're following the usual rule of quiet NaN in => quiet NaN out, with no exception. This does unfortunately conflict with the IBM decimal specification and Python's decimal module, where these operations *do* set the "invalid" flag (see the spec, and test fmax0809 in the decimal test set).
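For contrast, the decimal module's behaviour can be checked directly; `Decimal.fma` follows the IBM spec, so 0 times Infinity signals invalid even though the addend is a quiet NaN (this is the fmax0809 case):

```python
from decimal import Decimal, InvalidOperation

# Under the default context, InvalidOperation is trapped and raised,
# so the 0 * Infinity case below raises rather than returning NaN.
try:
    Decimal(0).fma(Decimal("Infinity"), Decimal("NaN"))
    signalled = False
except InvalidOperation:
    signalled = True
```

A binary float fma following the Intel choice would instead quietly return a NaN for the analogous inputs.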
Isn't the behaviour of quiet NaNs kind of implementation-dependent already?
Not as far as IEEE 754-2008 is concerned, and not as far as Python's math module is concerned, either: handling of special cases is, as far as I know, both consistent across platforms and compliant with IEEE 754. That's not true across Python as a whole, but it should be true for the math module. If you find an exception to the above statement, please do open a bug report!
Here's a pure Python reference implementation, with tests.
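The attached implementation isn't reproduced here, but for finite inputs a minimal pure-Python reference along the same lines can lean on fractions.Fraction for the exact arithmetic and on float() for the single correctly-rounded step (no inf, NaN, or signed-zero handling in this sketch):

```python
from fractions import Fraction

def fma_ref(x, y, z):
    # Exact product and sum in rational arithmetic; the float()
    # conversion then performs the one correctly-rounded step.
    # Finite inputs only: no inf, NaN or signed-zero handling.
    return float(Fraction(x) * Fraction(y) + Fraction(z))

# One classic use: recovering the rounding error of a product.
c = 0.1
err = fma_ref(c, c, -(c * c))   # exact c*c minus the rounded c*c
```

The recovered error term is itself exactly representable, so c*c plus err reproduces the true product exactly.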
And here's a patch.
LGTM, except that it needs a versionadded directive and a What's New entry. And maybe a few additional tests.
Serhiy, Victor: thanks for the reviews. Here's a new patch. Differences w.r.t. the old one:
Whoops; looks like I failed to attach the updated patch. Here it is.
LGTM, except that the lines in What's New are too long.
Thanks. Fixed (I think). I'm not sure what the limit is, but the lines are now all <= 79 characters long.
Ah, the dev guide says 80 characters. (https://docs.python.org/devguide/documenting.html) |
Then LGTM unconditionally.
Thanks, Serhiy. I'm going to hold off committing this for 24 hours or so, because I want to follow the buildbots when I do (and I don't have time for that right now). I wouldn't be at all surprised to see platform-specific test failures.
New changeset b33012ef1417 by Mark Dickinson in branch 'default':
Fingers crossed...
Failures on the Windows buildbot (http://buildbot.python.org/all/builders/AMD64%20Windows8.1%20Non-Debug%203.x/builds/238/steps/test/logs/stdio) shown below. It looks as though Windows is emulating the FMA operation on this machine (and not doing a particularly good job of it). That means that if we want to support Windows (and we do), we may have to do the emulation ourselves, preferably using something a bit more efficient than the fractions.Fraction module. I'll let the buildbots complete, to see what else fails, and then roll back the commit. The patch clearly isn't good enough in its current state.

======================================================================
Traceback (most recent call last):
  File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_math.py", line 1565, in test_fma_overflow
    self.assertEqual(math.fma(a, b, -c),
OverflowError: overflow in fma

======================================================================
Traceback (most recent call last):
  File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_math.py", line 1524, in test_fma_zero_result
    self.assertIsNegativeZero(math.fma(tiny, -tiny, 0.0))
  File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_math.py", line 1642, in assertIsNegativeZero
    msg="Expected a negative zero, got {!r}".format(value)
AssertionError: False is not true : Expected a negative zero, got 0.0

======================================================================
Traceback (most recent call last):
  File "D:\buildarea\3.x.ware-win81-release\build\lib\test\test_math.py", line 1623, in test_random
    self.assertEqual(math.fma(a, b, c), expected)
AssertionError: 0.5506672157701096 != 0.5506672157701097
New changeset b5a5f13500b9 by Mark Dickinson in branch 'default':
Also failures on Gentoo: here b is positive (possibly +inf), and c is finite, so we expect an infinite result. Instead, we're apparently getting a NaN. I don't have a good guess about what's causing this: the rest of the tests are passing, so it's unlikely that we're using a bad FMA emulation. Maybe an optimization bug?

======================================================================
Traceback (most recent call last):
  File "/buildbot/buildarea/3.x.ware-gentoo-x86.installed/build/target/lib/python3.7/test/test_math.py", line 1482, in test_fma_infinities
    self.assertEqual(math.fma(math.inf, b, c), math.inf)
ValueError: invalid operation in fma
The patch needs tests for the case where a*b overflows and c is infinite (either of the same sign as a*b or not). This combination should never return NaN, but a poor emulation of fma might do so.
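That combination is easy to check against a simple reference. Below, `fma_with_inf` is a hypothetical helper (not the patch's code) that special-cases an infinite addend, since fractions.Fraction cannot represent infinity:

```python
import math
from fractions import Fraction

def fma_with_inf(x, y, z):
    # Hypothetical reference for this test case only: a finite x*y
    # (however large) plus an infinite z is that same infinity.
    if math.isinf(z) and math.isfinite(x) and math.isfinite(y):
        return z
    return float(Fraction(x) * Fraction(y) + Fraction(z))

big = 1e308                  # big * big overflows the float range,
                             # but the exact result is still infinite
same_sign = fma_with_inf(big, big, math.inf)
opposite_sign = fma_with_inf(big, big, -math.inf)
```

Neither result should ever be NaN, which is exactly what a sloppy emulation might get wrong.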
Do I read this thread correctly in assuming that this hasn't been implemented yet? If it hasn't, I would probably make my own little library for this -- I really need the feature for the precision.
Yes. Existing libm implementations don't work, so simply wrapping the libm function isn't enough. And writing an implementation from scratch is non-trivial.
Okay. Is this because of the inf/NaN discrimination hiccups mentioned above, or are there any other pitfalls?
No, there's more to it than that. If it were just that, we could work around it by adding the appropriate special-case handling before calling the libm fma (as has been done, reluctantly, with some of the other math module functions; see the source for math.pow, for example). But the fma implementation on Windows is fundamentally broken. For finite numbers, it simply doesn't give what it's supposed to (a * b + c, computed with a _single_ rounding). Since that single rounding is most of the point of fma, that makes the libm fma not fit for purpose on Windows. It _is_ possible, with care, to code up a not-too-inefficient fma emulation using clever tricks like Veltkamp splitting and Dekker multiplication. I have half such an implementation sitting on my home computer, but so far have not had the cycles to finish it (and it's not high on the priority list right now).
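For reference, the splitting and multiplication tricks mentioned can be sketched in pure Python (CPython floats are IEEE doubles with round-to-nearest); a full fma emulation additionally needs an exact two-sum of the error term with c and a careful final rounding, which this sketch leaves out:

```python
def veltkamp_split(a):
    # Split a into hi + lo, each representable in roughly half the
    # significand bits, so that products of the halves are exact.
    c = 134217729.0 * a          # multiplier is 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def dekker_two_product(a, b):
    # Return (p, e) with p == fl(a*b) and p + e == a*b exactly
    # (valid in the absence of overflow and underflow).
    p = a * b
    ah, al = veltkamp_split(a)
    bh, bl = veltkamp_split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e
```

The pair (p, e) represents the product without any loss, which is the building block an fma emulation accumulates into c.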
Okay, thanks for the info. As a stop-gap measure, I've created pyfma [1, 2]. Install with
and use with
It only works reliably on Unix then, but that's all I care about. :) Cheers, [1] https://github.com/nschloe/pyfma
I converted https://hg.python.org/cpython/rev/b33012ef1417 written by Mark Dickinson into a GitHub PR: PR 17987. I still expect test failures. I plan to use the PR as a starting point to implement math.fma(). If tests continue to fail on some platforms, I plan to manually handle NaN and INF in the C code, before calling libc fma().
For Windows, you need to do much more than this: it's not just about handling NaNs and infinities, it's about reimplementing the entire function from scratch to give correctly rounded results. Without correctly-rounded results, there's very little point in having fma. If it were a couple of niche platforms that gave bad results, then we could push this through. But it's Windows. :-(
Okay, looks like Windows is happy in the PR's continuous integration. If the buildbots are also happy, then I'm content to have this pushed through.
Would it make sense to only make the function available on non-Windows platforms, as we do for other Unix-only functions in the os module? Maybe even skip more platforms if they provide a broken implementation. We could implement a test in configure to decide whether fma() fits our requirements or not.
FWIW, there is a new implementation of FMA [1] which is licensed very permissively [2]. Perhaps it could be used here as well? [1] https://github.com/smasher164/fma
It's worth noting that if you want a performant and open source implementation, you have:
I believe there is also one available from netlib, but I don't have that link handy. Arm typically has algorithms available on its repository, but doesn't provide one for
@tannergooding: the double (64-bit float) version is what's desired, maybe:
Yes, if you want the 64-bit ( |
Added new math.fma() function, wrapping C99's ``fma()`` operation: fused multiply-add function. Co-Authored-By: Mark Dickinson <mdickinson@enthought.com>
My previous attempt in 2020 failed. I'm trying again in 2024: PR #116667
Function added by commit 8e3c953. It only took 7 years for Linux, macOS and Windows to get good support of fma() :-)
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.