this post was submitted on 15 Sep 2024
21 points (100.0% liked)

[–] scarecrw@lemmy.one 5 points 2 months ago* (last edited 2 months ago) (2 children)

Why restrict to 54-bit signed integers? Is there some common language I'm not thinking of that has this as its limit?

Edit: Found it myself, it's the range where you can store an integer in a double precision float without error. I suppose that makes sense for maximum compatibility, but feels gross if we're already identifying value types. I don't come from a web-dev/js background, though, so maybe it makes more sense there.

[–] lolcatnip@reddthat.com 3 points 2 months ago

I don't think you realize just how much code is written in JavaScript these days.

[–] lysdexic@programming.dev 2 points 2 months ago* (last edited 2 months ago)

Why restrict to 54-bit signed integers?

Because JSON's number is a double, and IEEE 754 gives double-precision numbers a 53-bit significand (52 explicit bits plus an implicit leading bit), plus a separate sign bit.

Meaning, it's the widest range of integers that a double-precision value can represent exactly.
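A quick sketch of that boundary in Node.js, using the built-in `Number.MAX_SAFE_INTEGER` constant (which is 2^53 − 1):

```javascript
// 2^53 - 1 is the largest integer a double can represent such that
// it and every smaller non-negative integer are exact.
const max = Number.MAX_SAFE_INTEGER;
console.log(max);      // 9007199254740991  (2^53 - 1)
console.log(max + 1);  // 9007199254740992  (2^53, still exact)
console.log(max + 2);  // 9007199254740992  -- 2^53 + 1 is not representable, so it rounds
console.log(Number.isSafeInteger(max));     // true
console.log(Number.isSafeInteger(max + 1)); // false
```

Past 2^53, consecutive integers are no longer distinguishable, which is exactly the "loss of precision" being described.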

I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.

It's not about compatibility. It's because JSON has a single number type covering both floating-point values and integers, and number is typically implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
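You can see this silent rounding directly with JavaScript's built-in `JSON.parse`, which decodes every JSON number into a double:

```javascript
// 2^53 + 1 is a perfectly valid JSON number, but it is not
// representable as a double, so parsing silently rounds it down.
const parsed = JSON.parse("9007199254740993"); // 2^53 + 1
console.log(parsed);                           // 9007199254740992
console.log(parsed === 9007199254740992);      // true -- the integer changed value
```

This is why interchange formats that care about exact integers cap them at the double-safe range (or transmit them as strings).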