.min and .max: Why do they work the way they do?


I am somewhat confused by the way .min and .max work.
Let’s say I have the following code:


My intuition would say that something called “.min” sets a lower limit and “.max” an upper limit, but looking at “scope” it seems to me to be the other way round.

Could someone explain the logic behind that to me?

You have it the wrong way around: min means “take the lesser of two values.” If you write min(x, 0), that is conceptually the same as imposing a maximum of 0, because the result can never be greater than 0. Graphing it might give some intuition: https://www.wolframalpha.com/input/?i=min(0%2C+x)
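To make this concrete, here is a small Python sketch (the `clamp` helper is just for illustration, not from the original thread) showing that min() enforces an upper limit and max() a lower limit:

```python
# min() imposes an upper limit: the result can never exceed the second value.
print(min(5, 0))    # -> 0   (5 is capped at 0)
print(min(-5, 0))   # -> -5  (already below the cap, unchanged)

# max() imposes a lower limit: the result can never drop below the second value.
print(max(-5, 0))   # -> 0   (-5 is raised to 0)
print(max(5, 0))    # -> 5   (already above the floor, unchanged)

# Combining both gives the common "clamp" pattern:
def clamp(x, lower, upper):
    # max() applies the lower limit, then min() applies the upper limit.
    return min(max(x, lower), upper)

print(clamp(150, 0, 100))  # -> 100
print(clamp(-20, 0, 100))  # -> 0
print(clamp(42, 0, 100))   # -> 42
```

So the naming describes the operation performed (taking the minimum or maximum of two values), not the kind of limit it sets on the other argument.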

Ok, I see. Thanks for your answer!
