Given the recent work on lower-precision floating point being done by the PyTorch community (see https://pytorch.org/blog/training-using-float8-fsdp2/?utm_content=317436495&utm_medium=social&utm_source=linkedin&hss_channel=lcp-78618366), has there been any thought about whether AMGX will follow suit and support floats with less than 32- or 64-bit precision?
As someone less familiar with the internals of AMGX and more familiar with the community-driven interfaces, where might one start in the code to make this addition?
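For context, here is a minimal sketch of how precision is selected through AMGX's public C API today (error checking omitted). Matrix and vector precision are encoded in the mode enum, e.g. `AMGX_mode_dDDI` (double) vs `AMGX_mode_dFFI` (single), so that enum and the templated backend it dispatches to would likely be the place to start; the idea that a half- or float8-precision mode would be added as new letters in this encoding is my assumption, not an existing feature:

```c
/* Sketch: where precision currently lives in the AMGX C API.
 * Handles and calls are from the public amgx_c.h interface. */
#include <amgx_c.h>

int main(void)
{
    AMGX_config_handle    cfg;
    AMGX_resources_handle rsrc;
    AMGX_solver_handle    solver;

    AMGX_initialize();

    /* A simple AMG solver configuration via the config string. */
    AMGX_config_create(&cfg, "config_version=2, solver=AMG");
    AMGX_resources_create_simple(&rsrc, cfg);

    /* AMGX_mode_dFFI = device, Float matrix, Float vector, Int indices.
     * Today the only matrix/vector precisions are D (double) and F (float);
     * a hypothetical half or float8 mode would presumably slot in here. */
    AMGX_solver_create(&solver, rsrc, AMGX_mode_dFFI, cfg);

    /* ... create/upload matrix and vectors, setup, solve ... */

    AMGX_solver_destroy(solver);
    AMGX_resources_destroy(rsrc);
    AMGX_config_destroy(cfg);
    AMGX_finalize();
    return 0;
}
```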