This post was submitted on 09 Dec 2023

At Open Source Summit Japan, Linux and Git creator Linus Torvalds talked about Rust in Linux, Linux maintainer fatigue, and AI's future role in Linux and open-source development.

top 19 comments
[–] 1984@lemmy.today 49 points 11 months ago (1 children)

Looking ahead, Hohndel said, we must talk about "artificial intelligence large language models (LLMs). I typically say artificial intelligence is autocorrect on steroids. Because all a large language model does is it predicts what's the most likely next word that you're going to use, and then it extrapolates from there, so not really very intelligent, but obviously, the impact that it has on our lives and the reality we live in is significant."

Exactly.
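
For anyone curious what "predicting the most likely next word" actually looks like mechanically, here's a minimal sketch using the Hugging Face transformers library (GPT-2 is just a stand-in here; any causal language model works the same way):

```python
# Minimal sketch of autoregressive next-token prediction.
# Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The Linux kernel is", return_tensors="pt").input_ids
for _ in range(10):                                    # generate ten tokens, one at a time
    logits = model(ids).logits[0, -1]                  # a score for every token in the vocabulary
    next_id = torch.argmax(logits)                     # greedy: take the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and go again
print(tokenizer.decode(ids[0]))
```

The "extrapolates from there" part is literally just that loop running over and over.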

[–] Lmaydev@programming.dev 6 points 11 months ago (2 children)

It is very intelligent though.

It's not simple to come up with coherent statements across such a wide variety of tasks.

It's not just stringing random words together like predictive text. It understands context in a way that is very complex.

It is more knowledgeable than the average person by a huge amount.

For example, I asked it to write songs about squidmas, an imaginary holiday I made up to irritate my children. It was able to rewrite Christmas songs with a squid theme. That's way more complex than predictive text.

[–] thatsnothowyoudoit@lemmy.ca 33 points 11 months ago* (last edited 11 months ago) (1 children)

You’re ascribing a level of agency where none exists.

It appears to “understand.” It appears to be “knowledgeable.”

But LLMs do neither of those things.

Take this note from an OpenAI dev:

It’s that these models have leveraged so much data they’ve been able to map out relationships between words (or images) in such a way as to be able to generate what seem like new versions of those things.

I grant you that an LLM has more base-level knowledge than any one human, but again, this is thanks to a terrifyingly large dataset and a design that means it can access this data reasonably reliably.

But it is still a prediction model. It just has more context, better design, and (most importantly) more data to make predictions at a level never before seen.

If you’ve ever had a chance to play with a model at a level where you can control some of its basic parameters, you get a glimpse into just how much of a prediction machine it can be.

My favourite game for a while was to give Midjourney a wildly vague prompt but crank the chaos up to 100 (literally the --chaos flag at its highest level) to see what kind of wild connections exist but are being filtered out during “normal” use.

The same with the GPT-3.5 API in the “early days” - you could return multiple versions of the response and see the sausage being made to a very small degree.
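
You can still see the sausage being made this way: the OpenAI chat completions API will return several candidate responses to one prompt via the n parameter, and a high temperature widens the spread between them. A rough sketch (assumes an API key in the environment; the model name is just an example):

```python
# Rough sketch: sample several independent completions for the same prompt.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[{"role": "user", "content": "Describe the Linux kernel in one sentence."}],
    n=4,              # four independent candidates for the same prompt
    temperature=1.5,  # higher temperature flattens the distribution, so wilder picks
)
for i, choice in enumerate(resp.choices):
    print(f"--- candidate {i} ---\n{choice.message.content}")
```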

It doesn’t take away from the sense of magic using these tools. It just helps frame what’s going on under the hood.

[–] Lmaydev@programming.dev 2 points 11 months ago (1 children)

Given that it's an artificial intelligence, it stands to reason that its understanding and knowledge are artificial.

I don't think there's any relevance in pointing that out anymore. No one thinks it's conscious or a general AI.

I also don't see how it's massively different to our ability to parse and output text tbh.

[–] Swedneck@discuss.tchncs.de 11 points 11 months ago (3 children)

It's different to our ability because we actually know what words are; we know they refer to things.

All an LLM sees is tokens; it has absolutely no concept of what language actually is or what things mean. It's literally just "this number seems to occur after these numbers".
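
You can see the "just numbers" part directly with any tokenizer. A quick, purely illustrative sketch with GPT-2's tokenizer:

```python
# What the model actually receives: integer token IDs, not words.
# Assumes: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer("Linux is a family of operating systems").input_ids
print(ids)                                   # a plain list of integers; this is all the model sees
print(tokenizer.convert_ids_to_tokens(ids))  # the sub-word pieces those numbers stand for
```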

[–] 0ops@lemm.ee 5 points 11 months ago* (last edited 11 months ago)

That's kind of a given though. It's a large language model, so of course its "understanding" can only be in terms of language. In a way, words are its only sense (input) and its only way to interact with the world (output). The mechanism isn't really important, imo, since we could reduce our own understanding to chemical reactions.

Homo sapiens have many more dimensions of awareness, dozens maybe, including sight, hearing, time, pressure, acceleration, etc., and we've been collecting data from all of them 24/7 since embryo, plus instinct (pre-baked weights) from millions of years of evolution. We know that people born without a sense, let's say vision, cannot conceptualize visually, even when their sight is restored for a time. I remember reading a while back about a person born blind whose vision was restored, but they didn't know what "pointy" looked like. They couldn't know. Do they have a lower-quality understanding of the word?

My point being, I don't think it's fair to objectively compare understanding between a person and a model without a testable definition of that word. Imo, and feel free to disagree, understanding is no different from merely knowing; it's just implied that the knowledge is deeper, across multiple dimensions of awareness, including subconscious awareness of our own hormones.

[–] v_krishna@lemmy.ml 3 points 11 months ago* (last edited 11 months ago)

I think that is overly simplistic. Embeddings used by LLMs definitely do encode a concept of what things mean and how things relate to other things.

E.g., compare the embeddings of Paris, Athens, and London to those of other cities and they will have a small cosine distance between them. Compare France, Greece, and England: same. Then, very interestingly, look at Paris - France, Athens - Greece, and London - England, and you'll find the resulting vectors all align (fundamentally, the vector operation seems to capture the relationship "is the capital of"). Then go a step further and compare those vectors to Paris - US, Athens - US, and London - Canada. You'll see the previous set isn't aligned with these nearly as much, but these are aligned with each other (the relationship being something like "is a smaller city in this country, named after a famous city in some other country").

The way attention works, there is a whole bunch of semantic meaning baked into embeddings, and by comparing embeddings you can get at pragmatic meaning as well.
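
That "is the capital of" direction is easy to check with classic pretrained word vectors. A sketch using gensim and GloVe (the embeddings inside an LLM aren't these exact vectors, but the geometric argument is the same):

```python
# Vector arithmetic over pretrained word embeddings.
# Assumes: pip install gensim (the GloVe vectors download on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# paris - france + greece should land near athens if "is the capital of"
# really is a consistent direction in the embedding space.
print(vectors.most_similar(positive=["paris", "greece"], negative=["france"], topn=3))

# Cosine similarity directly: capitals sit close together.
print(vectors.similarity("paris", "athens"))
print(vectors.similarity("paris", "keyboard"))  # much lower, as a sanity check
```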

[–] java 1 points 11 months ago* (last edited 11 months ago)

It’s different to our ability because we actually know what words are; we know they refer to things.

After reading this, one might think that we know how our brains work and how we "know" or "think". But we don't. You aren't comparing actual mechanisms in your post, so I don't think the comparison holds.

[–] 1984@lemmy.today 16 points 11 months ago (3 children)

It's not intelligent because it's not thinking.

At least, my definition of intelligence is thinking. Otherwise a simple pattern-matching algorithm like a regexp is also intelligent, or a sorting algorithm that puts things in the right order.

But I agree it's very efficient and holds more data than any single person ever could. It's a computer; they are great at storing and processing information.
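
To make the reductio concrete: a regexp "recognizes" patterns without anything we'd call thinking. A trivial illustration:

```python
# A regexp matches patterns with zero understanding of what it's matching.
import re

pattern = re.compile(r"[\w.]+@[\w.]+\.\w+")  # something email-shaped
text = "Contact torvalds@kernel.org or open an issue."
print(pattern.findall(text))  # ['torvalds@kernel.org']
```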

[–] Lmaydev@programming.dev 2 points 11 months ago

You could say it's artificially intelligent haha

[–] Coldus12@reddthat.com 1 points 11 months ago

While I mostly agree, I'd like to point out that GOFAI (good old-fashioned AI) exists, and at its core it is basically just pathfinding like A* or something similar. And we still call that AI, because it "intelligently" finds a path quickly.

So my main point is that I agree that it isn't magic or sapient or anything, but in a sense it is definitely intelligent.
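
For reference, that branch of AI really does boil down to search. A compact, purely illustrative A* sketch on a grid:

```python
# Compact A* pathfinding on a 2D grid, the classic GOFAI workhorse.
import heapq

def astar(grid, start, goal):
    """grid: 2D list where 0 = free and 1 = wall; start/goal: (row, col)."""
    def h(p):  # Manhattan-distance heuristic: never overestimates on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f-score, g-score, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h(step), g + 1, step, path + [step]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # walks around the wall: (0,0) ... (2,0)
```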

[–] notonReddit@lemmy.dbzer0.com 1 points 11 months ago

Lol get a load of this fool

[–] wiki_me@lemmy.ml 17 points 11 months ago (2 children)

That said, Torvalds continued, "Rust has not really shown itself as the next great big thing. But I think during next year, we'll actually be starting to integrate drivers and some even major subsystems that are starting to use it actively. So it's one of those things that is going to take years before it's a big part of the kernel. But it's certainly shaping up to be one of those."

I don't know about that. Languages that are based on standards (C++, JavaScript, C) seem to have much better enduring popularity. I don't want to see Rust becoming less and less popular, which would lead to fewer available developers (like what is happening with Ruby).

[–] TheFriendlyArtificer 4 points 11 months ago (1 children)

Speaking as a non-Rustacean, I'm pretty okay with it becoming more integrated.

It's safe, performant, and isn't any more difficult to pick up than C++. C has a weird aura about it that makes it seem intimidating despite the fact that it is the simplest language (macros notwithstanding) that I've ever used.

Based on Google's recent track record of mind-boggling incompetence on all fronts, I want Go kept as far away from core functionality as humanly possible. This leaves either adding more cruft to an already ungainly C++, continuing to lean on Boost, or pivoting to a more modern language.

[–] caseyweederman@lemmy.ca 3 points 11 months ago (1 children)

Agreed re: Google.
I dunno what the solution is. The world without Google is going to be a very different place. Do you think it's even possible for them to turn things around?

[–] TheFriendlyArtificer 2 points 11 months ago (1 children)

I think it would take a pretty major sea change for them. They technically restructured under Alphabet, but I don't know a single person who actually uses that name when describing them.

Even if they did change things around, and I would wager that the entrenched bureaucracy will make that impossible, their name is toxic to a lot of tech nerds. We may be a minority, but we talk and people listen. Even the non-techies in my life know that Google can't maintain a simple messaging app, responded to (rightful!) concerns about data loss by locking the support threads, and has jacked up the price of YouTube year after year.

They've spectacularly failed at video game consoles, social media, banking/credit cards, IoT, messaging, and video, and can't even maintain a semblance of consistency in their office suite. At work I have three different ways to receive instant messages, and it's a crapshoot which one a coworker will use.

And let's not even get into how useless their search has become now that everything is gamed by SEO. DuckDuckGo has been my default for years, and it now consistently returns better results than big G.

If they managed to correct course tomorrow, it would take multiple years for me to even begin to trust them again.

[–] caseyweederman@lemmy.ca 1 points 11 months ago

Yeah. Extremely unlikely and probably impossible.
It's incredible how much they've been able to fail and still continue operating.

[–] java 2 points 11 months ago* (last edited 11 months ago)

We don't know what will happen in the future. But just above the quote you cited, he explains that Rust is actually bringing in new developers.

Hohndel commented that the aging of the kernel community is a "double-edged sword." Torvalds agreed, but he noted that "one of the things I liked about the Rust side of the kernel, was that there was one maintainer who was clearly much younger than most of the maintainers. We can clearly see that certain areas in the kernel bring in more young people." For example, on the driver side, you'll have a much easier time finding younger people, and that is traditionally how we've grown a lot of maintainers, including Greg [Kroah-Hartman, the Linux stable kernel maintainer].

Hohndel and Torvalds also talked about the use of the Rust language in the Linux kernel. Torvalds said, "It's been growing, but we don't have any part of the kernel that really depends on Rust yet. To me, Rust was one of those things that made technical sense, but to me personally, even more important was that we need to not stagnate as a kernel and as developers."

It will take a lot of time for Rust to play a key role, and it won't happen without enough Rust developers joining the project in the coming years. That itself could motivate more people to learn it, creating a self-reinforcing feedback loop.

[–] java 5 points 11 months ago* (last edited 11 months ago)

This is a great thing to read. I also have to thank the author for mostly using quotes in this article, without needless filler paragraphs between them; it's straight to the point.