this post was submitted on 15 May 2024
331 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] Amoeba_Girl@awful.systems 89 points 8 months ago (2 children)

From Re-evaluating GPT-4’s bar exam performance (linked in the article):

First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.

Ohhh, that is sneaky!
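
The percentile shuffle is easy to see with a toy example: rank the same raw score against different pools of test-takers and watch the number move. A minimal Python sketch, using made-up score distributions (not real UBE data; only GPT-4's widely reported score of 298/400 is taken from the coverage):

```python
# Same raw score, three hypothetical reference populations, three very
# different percentiles. All pools below are invented for illustration.

def percentile_of(score: float, population: list[float]) -> float:
    """Percent of the population scoring at or below `score`."""
    at_or_below = sum(1 for s in population if s <= score)
    return 100 * at_or_below / len(population)

gpt4_score = 298  # GPT-4's reported UBE score (out of 400)

feb_repeat_takers = [230, 240, 250, 255, 260, 265, 270, 275, 280, 285]
all_july_takers   = [250, 265, 275, 285, 290, 295, 300, 310, 320, 330]
passers_only      = [280, 290, 295, 300, 305, 310, 315, 320, 330, 340]

for label, pool in [("vs. Feb repeat takers", feb_repeat_takers),
                    ("vs. all July takers", all_july_takers),
                    ("vs. passers only", passers_only)]:
    print(f"{label}: {percentile_of(gpt4_score, pool):.0f}th percentile")
# vs. Feb repeat takers: 100th percentile
# vs. all July takers: 60th percentile
# vs. passers only: 30th percentile
```

Benchmark against the weakest pool available and a middling score looks like the 90th percentile; that's the whole trick.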

[–] Amoeba_Girl@awful.systems 64 points 8 months ago (40 children)

What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”

And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!

[–] ebu@awful.systems 20 points 8 months ago (26 children)

[...W]hen examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to 48th percentile overall, and 15th percentile on essays.

officially Not The Worst™, so clearly AI is going to take over law and governments any day now

also. what the hell is going on in that other reply thread. just a parade of people incorrecting each other going "LLMs don't work like [bad analogy], they work like [even worse analogy]". did we hit too many buzzwords?

[–] Amoeba_Girl@awful.systems 17 points 8 months ago* (last edited 8 months ago)

"Nooo you don't get it, LLMs are supposed to be shit"

[–] genuineparts@infosec.pub 15 points 8 months ago (1 children)

But LLMs don’t work like Typewriters, they work like Microwaves!

[–] froztbyte@awful.systems 12 points 8 months ago* (last edited 8 months ago)

oh is that how come I get so much popcorn around these discussions? 🤔 makes sense when you think about it!

[–] MajorHavoc@programming.dev 46 points 8 months ago (1 children)

AI being pushed by scam artists... Gee. Who could have guessed?

[–] FiniteBanjo@lemmy.today 12 points 8 months ago

I did. I guessed. I expressed skepticism when that headline first appeared.

[–] skillissuer@discuss.tchncs.de 27 points 8 months ago (1 children)

the perils of hitting /all

[–] dgerard@awful.systems 18 points 8 months ago (2 children)

416 updoots, what on earth

[–] skillissuer@discuss.tchncs.de 14 points 8 months ago

dj khaled suffering from success dot jpeg

[–] vin@lemmynsfw.com 24 points 8 months ago (1 children)

Though making an unreliable intern is amazing and was impossible 5 years ago...

[–] self@awful.systems 22 points 8 months ago

thank fuck sama invented the concept of doing a shit job

[–] HawlSera@lemm.ee 16 points 8 months ago (21 children)

It's almost like we can't make a machine conscious until we know what makes a human conscious, and it's obvious Emergentism is bullshit because making machines smarter doesn't make them conscious.

Time to start listening to Roger Penrose's Orch-OR theory as the evidence piles up - https://pubs.acs.org/doi/10.1021/acs.jpcb.3c07936

[–] blakestacey@awful.systems 29 points 8 months ago* (last edited 8 months ago) (16 children)

The given link contains exactly zero evidence in favor of Orchestrated Objective Reduction — "something interesting observed in vitro using UV spectroscopy" is a far cry from anything having biological relevance, let alone significance for understanding consciousness. And it's not like Orch-OR deserves the lofty label of theory, anyway; it's an ill-defined, under-specified, ad hoc proposal to throw out quantum mechanics and replace it with something else.

The fact that programs built to do spicy autocomplete turn out to do spicy autocomplete has, as far as I can tell, zero implications for any theory of consciousness one way or the other.

[–] walter_wiggles@lemmy.nz 15 points 8 months ago (1 children)

I asked AI to summarize the article since it's paywalled. It didn't say anything about lying, should I trust it?

[–] dgerard@awful.systems 20 points 8 months ago

As a large language model, absolutely
