That’s an excellent point. In base-two floating point, the denominator of any non-integer value must be a power of two, hence even.
Consider decimals. If you have a finite number of digits after the decimal point, then the denominator can contain only prime factors of the base. For decimal, those prime factors are 2 and 5: 0.333 = 333/1000 = 333 / (2*2*2*5*5*5), but there is no finite-precision decimal whose denominator includes a factor of 3 or 7 or 11.
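For what it’s worth, here is a little sketch of that rule in sclang (the helper name ~terminatesInBase is just something I made up for this post): a fraction p/q has a terminating representation in a given base exactly when, after reducing it, every prime factor of the denominator also divides the base.

(
~terminatesInBase = { |p, q, base|
    var d = q.div(gcd(p, q));   // reduce the fraction first
    var g = gcd(d, base);
    while({ g > 1 }, {
        d = d.div(g);           // strip out the prime factors shared with the base
        g = gcd(d, base);
    });
    d == 1                      // anything left over is a prime the base doesn't have
};
)

~terminatesInBase.(333, 1000, 10);  // true: 1000 = 2*2*2 * 5*5*5
~terminatesInBase.(1, 3, 10);       // false: 3 is not a factor of 10
~terminatesInBase.(1, 3, 2);        // false: no finite binary fraction equals 1/3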
In binary, the base has only one prime factor: 2. And of course floating point can’t represent infinite precision (double-precision allows 53 bits in the mantissa).
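(A quick sanity check of that 53-bit limit in the interpreter, if you like. Binary operators in sclang evaluate left to right, so this compares (2 ** 53) + 1 against 2 ** 53.)

(2.0 ** 53) + 1 == (2.0 ** 53)
-> true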
a = 1/3;
a.high32Bits.asBinaryString(32)
a.low32Bits.asBinaryString(32)
-> 00111111110101010101010101010101
-> 01010101010101010101010101010101
That is:
sign bit = 0
exponent = 01111111101
mantissa = implicit 1. then 0101010101010101010101010101010101010101010101010101
So the denominator of this fraction 1.0101010101… will be a very large power of two, but it is a power of two, hence even. Raising -1 to a fraction with an even denominator means taking an even root of a negative number, which has no real value, hence -1.pow(1/3)
cannot be computed using a system of finite-precision binary fractions.
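Concretely, decoding those fields (standard IEEE-754 double layout; the exact integer below is my own arithmetic, so feel free to double-check it):

exponent field 01111111101 = 1021, and 1021 - 1023 = -2
value = 1.0101...01 (53 significant bits) * 2^(-2)
      = 6004799503160661 / 2^54
      = 6004799503160661 / 18014398509481984

The numerator is odd, so that fraction is already in lowest terms; the denominator really is one big, even power of two.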
TL;DR -1.pow(0.33333333333333)
= -1.pow(33333333333333 / 100000000000000),
which likewise has an even denominator.
TL;DR It’s hard to argue with the IEEE; you’ll probably lose
hjh