this post was submitted on 01 Oct 2023
36 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.
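
The article never gets more specific than "reward people for contributing and labeling data," so here's a rough hypothetical sketch of what that pay-per-contribution ledger amounts to. Every name and number below is invented for illustration; none of it comes from DeSo, OpenSea, or any real system.

```python
# Hypothetical sketch of "get rewarded for contributing labeled data".
# Everything here is made up for illustration; the article specifies no
# mechanism and this reflects no real DeSo/OpenSea API.
from dataclasses import dataclass, field

REWARD_PER_EXAMPLE = 0.01  # the pennies in question


@dataclass
class Contribution:
    contributor: str
    data: str   # whatever the contributor claims is training data
    label: str  # whatever label they claim is correct


@dataclass
class ContributionLedger:
    entries: list[Contribution] = field(default_factory=list)
    balances: dict[str, float] = field(default_factory=dict)

    def submit(self, c: Contribution) -> None:
        # Append-only record plus a payout. Note what's missing:
        # nothing checks whether the data or the label is any good.
        self.entries.append(c)
        self.balances[c.contributor] = (
            self.balances.get(c.contributor, 0.0) + REWARD_PER_EXAMPLE
        )


ledger = ContributionLedger()
ledger.submit(Contribution("alice", "a photo of my cat", "cat"))
print(ledger.balances)  # {'alice': 0.01}
```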

top 20 comments
[–] Hundun 23 points 1 year ago (1 children)
[–] Evinceo@awful.systems 5 points 1 year ago

Should be the header image or sidebar or whatever for this lemmy.

[–] gerikson@awful.systems 16 points 1 year ago (1 children)

This comment from the HN discussion is too funny

https://news.ycombinator.com/item?id=37725746

The number of AI safety sessions I’ve joined where the speakers have no real AI experience talking about potentially bad futures, based on zero CS experience and little ‘evidence’ beyond existing sci-fi books and anecdotes, have left me very jaded on the subject as a ‘discipline’.

[–] froztbyte@awful.systems 6 points 1 year ago (1 children)

"who needs to listen to the poet/writers/painters/sculptors/.... anyway? they're just there to make things that look good in my palazzo garden!"

[–] 200fifty@awful.systems 5 points 1 year ago

Yes, there is a lot of bunk AI safety discussions. But there are legitimate concerns as well.

Hey, don't worry, someone's standing up for--

AI is close to human level.

Uh, never mind

[–] zogwarg@awful.systems 15 points 1 year ago

~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it's got ~~electrolytes~~ ledgers.

[–] swlabr@awful.systems 15 points 1 year ago* (last edited 1 year ago) (1 children)

Thinking otherwise is dumb (and, remember, that’s not you!).

Guy goes through so much effort to mash AI and blockchain together and he decides to be lazy here lol

Also, if believing this stuff is for smart people then just call me a fucking idiot.

[–] froztbyte@awful.systems 11 points 1 year ago

I actually get the impression that was the mask slipping for a moment: “remember, they must at all times think you’re clever! Never let them think otherwise!”

[–] self@awful.systems 14 points 1 year ago (1 children)

this thing was a fucking slog of bad ideas and annoying memes so I skimmed it, but other than the obvious (blockchains are way too inefficient for any of this to come even close to working), did the author even mention validation? cause if I’m contributing my own data and my own tags then I’m definitely going to either insert a whole bunch of nonsense into the model to get paid, or use an adversarial model to generate malicious data and weights and use that to break the model. the way crypto shitheads fix this is with a centralized oracle, which means all of this is just a pointless exercise in combining the two most environmentally wasteful types of grift tech ever invented
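
to put a number on it: a quick hypothetical sketch, assuming the naive pay-per-submission reward the article gestures at (it never specifies a validation step, which is the point). every name and value here is made up.

```python
# Hypothetical sketch, assuming a naive pay-per-submission reward with no
# validation (the article never specifies one): junk earns the same as
# real labeled data, so junk is the rational thing to submit.
import random
import string

REWARD_PER_EXAMPLE = 0.01
LABELS = ["cat", "dog", "not_a_hotdog"]


def junk_example() -> tuple[str, str]:
    """Worthless 'training data' that still collects a reward."""
    noise = "".join(random.choices(string.ascii_lowercase, k=32))
    return noise, random.choice(LABELS)


submissions = [junk_example() for _ in range(100_000)]
payout = len(submissions) * REWARD_PER_EXAMPLE
print(f"{len(submissions)} junk submissions, ${payout:,.2f} paid out")
```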

[–] gerikson@awful.systems 10 points 1 year ago

Considering the mix of bad ideas, I suppose the AI model will check itself. Maybe by mining.

[–] froztbyte@awful.systems 11 points 1 year ago* (last edited 1 year ago)

This article is an incredibly deep mine of bad takes, wow

Let’s just build a data intensive application on a foundation where we have none of that data nearby to start! Let’s forget about all other prior distribution models! Let’s make faulty rationalists* and leaps of assumptions!

And then you click through the profile:

I’ve spent a decade working in fintech at AIG, the Commonwealth Bank of Australia, Goldman Sachs, Fast, and Affirm in roles spanning data science, machine learning, software, data engineering, credit, and fraud.

Ah, he’s angling to get a priesthood role if this new faith thing happens. Got it.

*e: this was meant to be “rationalisations” and then my phone keyboard did a dumb. I’m gonna leave it that way because it’s funnier

[–] maol@awful.systems 8 points 1 year ago

I don't care what bad tech bingo card you're using, this has to be bingo.

[–] maol@awful.systems 6 points 1 year ago

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

I still don't understand why these people felt that art was something that had to be automated. I suppose people must have felt the same way when the printing press was invented, but while AI is quicker for the end user, it requires significantly more energy and resources.

If you're into blockchain, AI looks like an obvious place where blockchain would be a huge boon. It's only in the real world that it's a laughable combination.

[–] ABoxOfNeurons@lemmy.one 3 points 1 year ago (4 children)

I went in suspicious, but this actually sounds like a pretty decent idea.

The technology isn't stopping or going away any more than the cotton gin did. May as well put control in as many hands as possible. The alternative is putting it under the sole control of a few megacorps, which seems worse. Is there another option I'm not seeing?

[–] self@awful.systems 12 points 1 year ago
[–] froztbyte@awful.systems 10 points 1 year ago

Yeah seriously, you should look into VC work as your next career. You have that rare blend of wilful blindness and optimistic gullibility which seem to be job requirements there, you’ll do great!

[–] raktheundead@fedia.io 8 points 1 year ago* (last edited 1 year ago)

Not using blockchains, for a start. Blockchains centralise by design, because of economies of scale.

[–] gerikson@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Everyone puts legal limits in place to prevent LLMs from ingesting their content, the ones who break those limits are prosecuted to the full extent of the law, the whole thing collapses in a downward spiral, and everyone pretends it never happened.

Well, a man can dream.

[–] feistel@sns.feistel.party 2 points 1 year ago

People don't have data, they have blog posts, vacation photos, and text messages. Talking about personal data as an abstract thing leads to error.