Hi,
Currently, the comparison operators defined between AbstractIrrational and AbstractFloat are causing problems on GPUs.
By default, an AbstractIrrational is matched to the precision of the float it is compared against by invoking the rounded conversions Float64(x, RoundUp)/Float64(x, RoundDown) (and likewise for Float32 and Float16):
Lines 93 to 104 in 6e2e6d0
```julia
<(x::AbstractIrrational, y::Float64) = Float64(x,RoundUp) <= y
<(x::Float64, y::AbstractIrrational) = x <= Float64(y,RoundDown)
<(x::AbstractIrrational, y::Float32) = Float32(x,RoundUp) <= y
<(x::Float32, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::Float16) = Float32(x,RoundUp) <= y
<(x::Float16, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::BigFloat) = setprecision(precision(y)+32) do
    big(x) < y
end
<(x::BigFloat, y::AbstractIrrational) = setprecision(precision(x)+32) do
    x < big(y)
end
```
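For illustration, the rounded conversions bracket the true value, and the comparisons above dispatch on them. For example, with π and Float32:

```julia
julia> Float32(π, RoundDown), Float32(π, RoundUp)
(3.1415925f0, 3.1415927f0)

julia> 3.1415925f0 < π  # hits <(::Float32, ::AbstractIrrational), i.e. x <= Float32(π, RoundDown)
true

julia> 3.1415927f0 < π  # this Float32 already exceeds π
false
```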
These rounded conversions internally call setprecision(BigFloat, 256):
Lines 68 to 72 in 6e2e6d0
```julia
@assume_effects :total function (t::Type{T})(x::AbstractIrrational, r::RoundingMode) where T<:Union{Float32,Float64}
    setprecision(BigFloat, 256) do
        T(BigFloat(x)::BigFloat, r)
    end
end
```
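In other words, every rounded conversion that reaches this fallback effectively evaluates something like the following (shown for T = Float64), allocating a 256-bit BigFloat through MPFR unless the whole call gets constant-folded:

```julia
julia> setprecision(BigFloat, 256) do
           Float64(BigFloat(π), RoundUp)  # BigFloat(π) is computed by libmpfr
       end
3.1415926535897936
```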
And this depends on libmpfr, which is not supported on the GPU. As a result, this implementation has been causing problems downstream.
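As a hypothetical reproducer (assuming CUDA.jl; this snippet is illustrative, not taken from a downstream report), a kernel that compares device values against an irrational can fail to compile once it reaches the MPFR-backed fallback:

```julia
using CUDA

function kernel!(out, xs)
    i = threadIdx().x
    # Dispatches to <(::Float32, ::AbstractIrrational); if the rounded
    # conversion is not constant-folded away, it reaches BigFloat/libmpfr,
    # which cannot be lowered to GPU code.
    @inbounds out[i] = xs[i] < π
    return nothing
end

xs = CUDA.rand(Float32, 32)
out = CUDA.zeros(Bool, 32)
@cuda threads=32 kernel!(out, xs)
```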
These issues shouldn't happen when a given AbstractIrrational's conversion is defined statically by specializing Float(BigFloat), for example as sketched below.
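For illustration (the constant twoπ and its hard-coded values below are hypothetical, not an existing definition), such a static specialization could look like:

```julia
const twoπ = Irrational{:twoπ}()  # hypothetical user-defined constant

# Statically-known rounded conversions: comparisons against Float64 then
# never need BigFloat/libmpfr at runtime.
Base.Float64(::Irrational{:twoπ}, ::RoundingMode{:Down}) = 6.283185307179586
Base.Float64(::Irrational{:twoπ}, ::RoundingMode{:Up})   = 6.283185307179587
```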
To fix this, we need to change the behavior of the comparison operators so that they check whether such a Float(BigFloat) specialization exists, and fall back to dynamic precision adjustment only when it does not.
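A minimal sketch of that idea (the names has_static_conversion and lt_irrational are hypothetical, and in practice the check would have to be something the compiler can resolve statically for GPU code):

```julia
# Hypothetical: true when T(x, ::RoundingMode) has a method more specific
# than the generic AbstractIrrational fallback quoted above.
function has_static_conversion(::Type{T}, x::AbstractIrrational) where {T<:Union{Float32,Float64}}
    which(T, Tuple{typeof(x), RoundingMode}) !==
        which(T, Tuple{AbstractIrrational, RoundingMode})
end

# Hypothetical comparison that prefers the static, GPU-compatible path and
# falls back to dynamic precision adjustment only when it has to.
function lt_irrational(x::AbstractIrrational, y::Float64)
    if has_static_conversion(Float64, x)
        Float64(x, RoundUp) <= y              # no BigFloat involved
    else
        setprecision(BigFloat, 256) do        # dynamic fallback, CPU only
            Float64(BigFloat(x), RoundUp) <= y
        end
    end
end
```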