this post was submitted on 02 Oct 2023
183 points (100.0% liked)

top 42 comments
[–] simple@lemm.ee 28 points 1 year ago* (last edited 1 year ago) (6 children)
[–] xoggy@programming.dev 23 points 1 year ago (4 children)

And the site's dark mode is fantastic...

[–] snooggums@kbin.social 8 points 1 year ago (1 children)
[–] kambusha@feddit.ch 3 points 1 year ago

Lol, who turned the lights out?

[–] Virkkunen@kbin.social 3 points 1 year ago

This one really got a laugh out of me

[–] flamingos@feddit.uk 13 points 1 year ago (1 children)

Thank god for reader view because this makes me feel physically sick to look at.

[–] hazelnoot 2 points 1 year ago

Right?? I normally love it when websites have a fun twist, but this one really needs an off button. The other cursors keep covering the text and it becomes genuinely uncomfortable to read. Fortunately, you can easily block the WS endpoint with any ad blocker.

[–] interolivary 6 points 1 year ago
[–] redcalcium@lemmy.institute 4 points 1 year ago

I love it. People should be having more fun with their own personal sites.

[–] Blackmist@feddit.uk 1 points 1 year ago

Are those other readers' mouse pointers?

[–] atheken@programming.dev 14 points 1 year ago (2 children)

Unicode is thoroughly underrated.

UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than if you rewrite).

On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.
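A quick illustration of that ASCII compatibility, as a minimal JS/TS sketch using the standard TextEncoder API (the helper name is mine):

const utf8hex = (s: string) =>
  Array.from(new TextEncoder().encode(s), b => b.toString(16).padStart(2, "0"));

console.log(utf8hex("Hi!"));
// => ["48", "69", "21"] – byte-for-byte identical to ASCII, high bit clear

console.log(utf8hex("Héllo"));
// => ["48", "c3", "a9", "6c", "6c", "6f"] – "é" becomes a two-byte sequence
//    whose bytes all have the high bit set, so ASCII-only tooling never
//    mistakes it for ASCII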

[–] Jummit@lemmy.one 6 points 1 year ago (1 children)

I've recently come to appreciate the "refactor the code while you write it" and "keep possible future changes in mind" ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.

[–] Pantoffel@feddit.de 1 points 1 year ago

Yes, but once code becomes spaghetti enough that a "refactor while you write it" is too time-intensive and error-prone, it's already too late.

[–] JackbyDev@programming.dev 3 points 1 year ago* (last edited 1 year ago)

Unrelated, but what do you think (if anything) the last remaining reserved bit in the IP packet header flags might end up being used for?

https://en.wikipedia.org/wiki/Evil_bit

https://en.wikipedia.org/wiki/Internet_Protocol_version_4#Header

[–] Obscerno@lemm.ee 12 points 1 year ago (1 children)

Man, Unicode is one of those things that is both brilliant and absolutely absurd. There is so much complexity to language and making one system to rule them all ends up involving so many compromises. Unicode has metadata for each character and algorithms dealing with normalization and capitalization and sorting. With human language being as varied as it is, these algorithms can have really wacky results. Another good article on it is https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/

And if you want to RENDER text, oh boy. Look at this: https://faultlore.com/blah/text-hates-you/
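To make the normalization point concrete, here's a minimal JS/TS sketch using the built-in String.prototype.normalize: the precomposed and decomposed forms of "é" render identically but are different code point sequences until you normalize them.

const precomposed = "\u00E9"; // "é" as a single code point
const decomposed = "e\u0301"; // "e" followed by a combining acute accent

console.log(precomposed === decomposed);
// => false – same visible character, different code points

console.log(precomposed.normalize("NFC") === decomposed.normalize("NFC"));
// => true – equal once both are normalized to the composed form

console.log([...decomposed].length);
// => 2 – two code points for one visible character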

[–] emptyother@programming.dev 3 points 1 year ago

Oh no, we've been hacked! There's a Chinese character in the event log! Or was it just Unicode?

The entire video is worth watching: the history of "plain text" from the beginning of computing.

[–] Knusper@feddit.de 11 points 1 year ago

They believed 65,536 characters would be enough for all human languages.

Gotta love these kinds of misjudgements. Obviously, they were pushing against pretty hard size restrictions back then, but at the same time they did have the explicit goal of fitting in all languages, and if you just look at the Asian languages it should be pretty clear that 65,536 is not a lot at all...
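That 65,536 figure is exactly what fits in one 16-bit code unit, which is why UTF-16 (and languages that expose it directly, like JavaScript) needs surrogate pairs for everything added later. A small JS/TS illustration:

console.log("A".length);
// => 1 – inside the original 16-bit range (the Basic Multilingual Plane)

console.log("💩".length);
// => 2 – U+1F4A9 is outside the BMP, so UTF-16 stores it as a surrogate pair

console.log("💩".codePointAt(0)!.toString(16));
// => "1f4a9"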

[–] lyda@programming.dev 10 points 1 year ago (3 children)

The mouse pointer background is kind of a dick move. Good article, but the background is annoying for tired old eyes, which I assume are a target demographic for that article.

Wow this is awful on mobile lol

[–] lyda@programming.dev 4 points 1 year ago (1 children)

js console: document.querySelector('.pointers').hidden=true

[–] hazelnoot 2 points 1 year ago

Thank you for this! You can also get rid of it with a custom ad-blocker rule. I added these to uBlock Origin, and it totally kills the pointer thing.

wss://tonsky.me
http://tonsky.me/pointers/
https://tonsky.me/pointers/
[–] heftig 2 points 1 year ago (1 children)

You're actually seeing the mouse pointers of other people who have the page open. It connects to a websocket endpoint that includes the page URL and your platform (OS), and sends your current mouse position every second.
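Presumably something along these lines on the client side. This is a hypothetical sketch based purely on that description, not the site's actual code; the endpoint path and message shape are guesses:

// Hypothetical reconstruction of the cursor sharing described above.
const ws = new WebSocket("wss://tonsky.me" + location.pathname); // path format assumed

let latest = { x: 0, y: 0 };
document.addEventListener("mousemove", e => {
  latest = { x: e.clientX, y: e.clientY };
});

// Report the current position once a second, as described above.
setInterval(() => {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ platform: navigator.platform, ...latest }));
  }
}, 1000);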

[–] lyda@programming.dev 6 points 1 year ago

Just because you can do something...

[–] TehPers 10 points 1 year ago (1 children)

The only modern language that gets it right is Swift:

print("🤦🏼‍♂️".count)
// => 1

Minor, but I'm not sure this is as unambiguous as the article claims. It's true that for someone "that isn’t burdened with computer internals" this is the most obvious "length" of the string, but programmers are by definition burdened with computer internals. That's not to say the length shouldn't be 1, though; it's more that the "length" field/property has a terrible name, and asking for the length of a string is a very ambiguous question to begin with.

Instead, I think a better solution is to be clear what length you're actually referring to. For example, with Rust, the .len() method documents itself as the number of bytes in the string and warns that it may not be what you're interested in. Similarly, .chars() clarifies that it iterates over Unicode Scalar Values, and not grapheme clusters (and that grapheme clusters are unfortunately not handled by the standard library).

For most high level applications, I think you generally do want to work with grapheme clusters, and what Swift does makes sense (assuming you can also iterate over the individual bytes somehow for low level operations). As long as it is clearly documented what your "length" refers to, and assuming the other lengths can be calculated, I think any reasonably useful length is valid.

The article they link in that section does cover a lot of the nuances between them, and is a great read for more discussion around what the length should be.

Edit: I should also add that Korean, for example, adds some additional complexity to it. For example, what's the string length of 각? Is it 1, because it visually consumes a single "space"? Or is it 3 because it's 3 letters (ㄱ, ㅏ, ㄱ)? Swift says the length is 1.
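For comparison, here are the different "lengths" of that same emoji in a JS/TS sketch (Intl.Segmenter for grapheme clusters is available in current runtimes):

const s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}"; // 🤦🏼‍♂️

console.log(s.length);
// => 7 – UTF-16 code units

console.log([...s].length);
// => 5 – Unicode code points (scalar values)

console.log(new TextEncoder().encode(s).length);
// => 17 – UTF-8 bytes

const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });
console.log([...segmenter.segment(s)].length);
// => 1 – extended grapheme clusters, the count Swift reports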

[–] neutron@thelemmy.club 3 points 1 year ago

If we're being really pedantic, the last part in Korean is counted with different units:

  • 각 as precomposed character: 1자 (unit ja for CJK characters)
  • 각 (ㄱㅏㄱ) as decomposable components: 3자모 (unit jamo for Hangul components)

So we could have separate implementations of length() where we count such cases with different criteria... but I wouldn't expect non-speakers of Korean to know all of this.

Plus, what about Chinese characters? Are we supposed to count 人 as one, but 仁 as one (character) or two (radicals)? It only gets more complicated.
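For what it's worth, Unicode encodes both of those Korean counts: the precomposed syllable is a single code point, and normalization can decompose it into its jamo. A small JS/TS sketch:

const syllable = "\uAC01"; // 각, precomposed Hangul syllable U+AC01

console.log([...syllable].length);
// => 1 – one code point, one 자

console.log([...syllable.normalize("NFD")].length);
// => 3 – decomposed into the conjoining jamo ㄱ + ㅏ + ㄱ, i.e. 자모

console.log(syllable.normalize("NFD") === syllable);
// => false, even though both render identically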

[–] zquestz@lemm.ee 9 points 1 year ago

Was actually a great read. I didn't realize there were so many ways to encode the same character. TIL.

[–] lucas@startrek.website 8 points 1 year ago (2 children)

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

Who wants to tell the author that not everything was invented in the US? (And computers certainly weren't)

[–] Deebster@lemmyrs.org 4 points 1 year ago* (last edited 1 year ago)

The stupid thing is, all the author had to do was write "kind of tells you who invented ASCII" and he'd have been 100% right in his logic and history.

[–] SnowdenHeroOfOurTime@unilem.org 3 points 1 year ago (2 children)

Where were computers invented in your mind? You could define computer multiple ways but some of the early things we called computers were indeed invented in the US, at MIT in at least one case.

[–] lucas@startrek.website 5 points 1 year ago (1 children)

Well, it's not really clear-cut, which is part of my point, but probably the 2 most significant people I could think of would be Babbage and Turing, both of whom were English. Definitely could make arguments about what is or isn't considered a 'computer', to the point where it's fuzzy, but regardless of how you look at it, 'computers were invented in America' is rather a stretch.

[–] SnowdenHeroOfOurTime@unilem.org 1 points 1 year ago (1 children)

'computers were invented in America' is rather a stretch.

Which is why no one said that. I read most of the article and I'm still not sure what you were annoyed about. I didn't see anything US-centric, or even anglocentric really.

[–] lucas@startrek.website 6 points 1 year ago

To say I'm annoyed would be very much overstating it, just a (very minor) eye-roll at one small line in a generally very good article. Just the bit quoted:

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

So they could also be attributing it to some other country that uses $ for its currency, of which there are a few, but it seems most likely to be referring to USD.

[–] Deebster@lemmyrs.org 3 points 1 year ago

I think the author's intended implication is absolutely that it's a dollar because the USA invented the computer. The two problems I have are that:

  1. He's talking about the American Standard Code for Information Interchange, not computers at that point
  2. Brits or Germans invented the computer (although I can't deny that most of today's commercial computers trace back to the US)

It's just a lazy bit of thinking in an otherwise excellent and internationally-minded article and so it stuck out to me too.

[–] amio@kbin.social 7 points 1 year ago (1 children)

Holy Jesus, what a color scheme.

[–] Nighed@sffa.community 2 points 1 year ago

I prefer it to black on white. Inferior to dark mode though.

[–] onlinepersona@programming.dev 6 points 1 year ago

Because strings are such a huge problem nowadays, every single software developer needs to know the internals of them. I can't even stress it enough: strings are such a burden nowadays that if you don't know how to encode and decode one, you're beyond fucked. It'll make programming so difficult - no, even worse, nigh impossible! Only those who know about Unicode will be able to write any meaningful code.

[–] robinm@programming.dev 5 points 1 year ago* (last edited 1 year ago) (1 children)

I do understand why old Unicode versions reused “i” and “I” for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don't understand why more recent versions haven't introduced two new characters that look exactly the same but don't require locale-dependent knowledge to do something as basic as “to lowercase”.
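For reference, that locale dependence is directly observable with the built-in locale-aware case mapping (JS/TS):

console.log("I".toLowerCase());
// => "i" – default, locale-independent mapping

console.log("I".toLocaleLowerCase("tr"));
// => "ı" – Turkish dotless lowercase i (U+0131)

console.log("\u0130".toLocaleLowerCase("tr")); // "İ", Turkish dotted capital I
// => "i"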

[–] chinpokomon@lemmy.ml 1 points 1 year ago

Probably for the same reason Spanish used to consider ch, ll, and rr as a single character.

[–] LaggyKar@programming.dev 5 points 1 year ago

If you go to the page without the trailing slash, the images don't load

[–] zlatko@programming.dev 2 points 1 year ago

The article sure mentions 💩 a lot.

[–] phoenixz@lemmy.ca 1 points 1 year ago

Just give me plain UTF-32 with ~4 billion code points; that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen on either byte length or character count without the CPU spending minutes counting each individual character.

I think Unicode started as a great idea and then kind of blundered into aimless "everybody kinda does what everyone wants" territory. Unicode is for humans, sure, but we shouldn't forget that computers actually have to do the work.