Should SuperCollider Show Full Floating-Point Precision? Time for a Change?

Continuing the discussion from Slope and derivative:

SuperCollider’s behavior with floating-point numbers appears unique. Unlike other programming languages, sclang seems to automatically round numbers to 6 decimal places in its display output, effectively masking the underlying floating-point representation. While this makes the output look cleaner, several devs/contributors (not just me) have concerns about this approach since it obscures the actual state of floating-point values in the system.

This behavior differs significantly from how other languages handle floating-point displays, and I’m curious about the historical reasons behind this design choice in SuperCollider. Given our current discussion about floating-point artifacts and peculiarities, perhaps this would be a good time to examine this aspect of the language more closely.

I’d be particularly interested in hearing others’ thoughts on:

  • The benefits and drawbacks of hiding floating-point precision
  • Whether this behavior causes unexpected edge cases
  • Should we consider making the full floating-point representation more accessible or standard?

Examples:

Haskell:

Prelude> 0.3 + 0.1 + 0.2
0.6000000000000001
Prelude> 0.1 + 0.1 + 0.1
0.30000000000000004

Python:

>>> 0.3 + 0.1 + 0.2
0.6000000000000001
>>> 0.1 + 0.1 + 0.1
0.30000000000000004

SCLang:

0.3 + 0.1 + 0.2  
-> 0.6
0.1 + 0.1 + 0.1  
-> 0.3 

But it is deceptive; errors exist nevertheless:

0.1 + 0.1 + 0.1 == 0.3 
-> false

Perl 6/Raku is an interesting exception among programming languages because it stores decimal literals as rational numbers:

> 0.1 + 0.1 + 0.1
0.3
> 0.1 + 0.1 + 0.1 == 0.3
True
> -7/3 == -(.1 + 1/3 + 1.9)
True
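For comparison, the same rational-number behavior that Raku gets from its `Rat` type can be reproduced in Python with the standard-library `fractions` module (this is just an analogy to illustrate exact rational arithmetic, not a claim about how Raku is implemented):

```python
from fractions import Fraction

# Exact rational arithmetic: no binary floating-point rounding occurs
a = Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10)
print(a)                     # 3/10
print(a == Fraction(3, 10))  # True

# The Raku example from above, restated with exact rationals:
# 1/10 + 1/3 + 19/10 = (3 + 10 + 57)/30 = 70/30 = 7/3
print(Fraction(-7, 3) == -(Fraction(1, 10) + Fraction(1, 3) + Fraction(19, 10)))  # True
```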

@julian @scztt @Spacechild1 @jamshark70 @muellmusik @jordan @semiquaver @PitchTrebler @josh @adc

We tried it once. Users panicked. We changed it back.

hjh


For that slope use case it would be wonderful :slight_smile: Most of the stuff I'm doing or talking about is based on deriving slopes from ramps.


We have several interrelated representations:

  1. strings posted and strings typed (REPL level), e.g. "0.3".
  2. number objects (sclang level, which handle message dispatch, 0.3 + 0.3)
  3. byte representations (floating point numbers, which provide the resolution)
  4. They all represent various aspects of mathematical numbers (which in turn may represent other things, like positions, frequencies, or whatever).

The way it is currently done is that the string representation (1) symbolises the object (2), which represents two things:

  • the calculation state (3) precisely,
  • the mathematical number (4) algebraically and approximately.

You are suggesting (1) should represent (3) more closely.

Currently, the string (1) is just a handle to the object (2). We all (should) know that you shouldn’t compare two floats by equality. Displaying them at higher resolution may help teach this, but once you know it, it is no longer necessary. Then it is just a little tedious?

(Strictly speaking, there are many ways to map the number continuum to something you can reckon with. We are dealing with a special case anyhow.)
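The point about never comparing floats by equality can be made concrete with a tolerance-based comparison; in Python, for instance, the standard library provides `math.isclose` for exactly this:

```python
import math

a = 0.1 + 0.1 + 0.1
b = 0.3

# Exact equality fails because a and b differ in the last bit of the mantissa
print(a == b)              # False

# A relative-tolerance comparison succeeds (default rel_tol is 1e-09)
print(math.isclose(a, b))  # True
```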

I can’t imagine a SuperCollider user panicking! :slight_smile:

Happy Christmas.


[chuckles]
Sad to hear that this was the case about users.
Oh well.
Me personally I enjoy precision. I find it to be very blingy. I like doing this for example:

calcFreq: Pfunc{|ev|ev.use{ev.freq.asStringPrec(48).postln}},
frq: Pfunc{|ev|ev.use{ev.freq}},

I think it is very nice to see this kind of thing in a running piece of Pattern code. I would use even higher precision if it were possible. It seems that 48 is the max for .asStringPrec, but maybe you know another way, James?

SC is not unique in this respect; Pd does it too! There have also been several discussions about it.

Actually, both the printf family and I/O streams allow setting the precision at runtime, so we could make it an interpreter option. Personally, I would keep the current behavior, but if people want to see more digits, they’d only have to change a setting in the IDE.
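A runtime-configurable precision setting of the kind proposed here could be sketched like this (in Python for brevity; the class and attribute names are made up for illustration, not actual sclang or IDE API):

```python
class PostWindow:
    """Hypothetical post-window formatter with a runtime precision option."""
    precision = 6  # mirrors sclang's current default of 6 significant digits

    @classmethod
    def format_float(cls, x: float) -> str:
        return f"{x:.{cls.precision}g}"

short_form = PostWindow.format_float(0.1 + 0.1 + 0.1)  # '0.3'
PostWindow.precision = 17  # 17 significant digits always round-trip a double
long_form = PostWindow.format_float(0.1 + 0.1 + 0.1)   # '0.30000000000000004'
print(short_form, long_form)
```

The same idea maps directly onto C++'s `std::setprecision` on the post-window stream, which is what the suggestion amounts to.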


I was thinking this – a Process.printFloatMaxDigits option or such.

:wink: May have been too strong a word – but it was not a popular change when we raised the print precision before.

You could hack the source code I guess.

hjh


The language I am currently working on prints the briefest string representation that round-trips to a bit-identical value.
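Python has done this for its `repr` since version 3.1, which makes it an easy way to see what "shortest round-tripping string" means in practice:

```python
x = 0.1 + 0.1 + 0.1

# repr produces the shortest decimal string that parses back
# to exactly the same IEEE 754 double
s = repr(x)
print(s)               # '0.30000000000000004'
assert float(s) == x   # bit-identical round trip

# When fewer digits suffice, fewer digits are printed
print(repr(0.5))       # '0.5'
```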


Are you referring to the Ryū float-to-string conversion? I’ve seen this in a Haskell base module. It seems pretty interesting.

It is both brief and precise; it seems to use a look-up table for powers of 5, which also makes it fast.

I think the current behavior is fine for this case:

a = 0.1 + 0.1 + 0.1 // -> 0.3
b = 0.3             // -> 0.3
a == b              // -> false
a.dump              //   Float 0.300000   33333334 3FD33333
b.dump              //   Float 0.300000   33333333 3FD33333
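The same one-ulp difference that .dump reveals can be seen in any language by inspecting the raw bytes of the doubles; a Python equivalent using the standard `struct` module:

```python
import struct

a = 0.1 + 0.1 + 0.1
b = 0.3

# Big-endian raw bytes of each IEEE 754 double: they differ
# only in the final bit of the mantissa
print(struct.pack('>d', a).hex())  # 3fd3333333333334
print(struct.pack('>d', b).hex())  # 3fd3333333333333
```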

An exact representation does not seem to be practical in this case:

x = 1e-15; a.round(x) == b.round(x) // -> true
x = 1e-16; a.round(x) - b.round(x)  // -> 1.1102230246252e-16

The source has some issues with platform-dependent behavior and other edge cases depending on the compiler/OS. There is a PR discussing this, and it seems to me that std::setprecision(precision) is an acceptable alternative for what .asStringPrec tries to achieve, with a user-defined format flag (std::fixed, std::scientific, or std::defaultfloat).

Another fascinating approach was mentioned in this thread; maybe we should check it out, too.

Which is, I suppose, what sclang should be doing when you call asCompileString.

I think getting the shortest string that guarantees an exact round-trip back to precisely the same binary floating point number is not the same thing.

yes, exactly – this is why I mentioned asCompileString.
@asynth is there a simple way to do this or is there a lot of trickery to be done?


No, my method is slow. I try precisions with a binary search.
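The search-based approach described here (try increasing precisions until the string round-trips exactly) can be sketched in a few lines; this is a linear-scan variant in Python rather than the binary search mentioned, just to show the idea:

```python
def shortest_roundtrip(x: float) -> str:
    """Return a short decimal string that parses back to exactly x.

    Tries increasing numbers of significant digits until the
    round trip is bit-exact; 17 digits always suffice for a double.
    """
    for prec in range(1, 18):
        s = f"{x:.{prec}g}"
        if float(s) == x:
            return s
    return f"{x:.17g}"

print(shortest_roundtrip(0.1))        # '0.1'
print(shortest_roundtrip(0.1 + 0.2))  # '0.30000000000000004'
```

Ryū achieves the same result without the repeated format-and-parse loop, which is why it is so much faster.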


For reference, I mentioned this one in the other post: GitHub - ulfjack/ryu: Converts floating point numbers to decimal strings

Well, this could be implemented. The Ryu code is well-tested and faster than the usual approach. How complicated would it be to include it as a primitive?

It’s not extremely hard; you’d need to read the existing code for primitives and make a pull request on that basis. Since the Ryu code for doubles is not exactly short, it could be kept in a separate file.


Yes, it was a little bit rhetorical. Adding an extra file, it could either be the default output (maybe just hacking prettyFormatFloat, changing sprintf for something like d2s_buffered) or an additional method along the lines of asStringPrec.

I guess the question is whether this is a desirable design change. It will enhance consistency and performance. I don’t see a technical downside, but it is not a priority either.

This seems like a sensible approach to me.

Matlab handles this with the commands format short and format long in the command window (the default is short). I find this convenient for the (rare) times I want to observe the full precision. So having a similar flag available in the language, or a check box in the post window dock, could also be nice.
