this post was submitted on 04 Sep 2023

Technology

Thoughts from James, who recently held a Gen AI literacy workshop for older teenagers.

On risks:

One idea I had was to ask a generative model a question and fact-check the answer's claims in front of students, letting them see fact-checking as part of the process. It must be clear upfront that while AI-generated text may be convincing, it may not be accurate.

On usage:

Generative text should not be positioned, or used, as a tool that entirely replaces tasks; that risks disempowering students. Rather, it should be taught as a creativity aid, and such a class should involve an exercise in making something.

[–] lvxferre@lemmy.ml 1 points 1 year ago

humans regularly “hallucinate”, it’s just not something we recognize as such. There’s neuro-atypical hallucinations, yes, but there’s also misperceptions, misunderstandings, brain farts, and “glitches” which regularly occur in healthy cognition, and we have an entire rest of the brain to prevent those.

Can you please tone down the fallacies? So far I've seen the following:

  • red herring - "LLMs are made of dozens of layers" (which doesn't matter in the context of this discussion)
  • appeal to ignorance - "they don't matter because [...] they exist as black boxes"
  • appeal to authority - "for the record, I personally know [...]" (pragmatically the same as "chrust muh kwalifikashuns")
  • inversion of the burden of proof (already mentioned)
  • faulty generalisation (addressing an example as if it addressed the claim being exemplified)

And now, the quoted excerpt shows two more:

  • moving the goalposts - it's trivial to show that humans can sometimes be dumb, but that does not contradict the other poster's claim.
  • equivocation - you're going out of your way to label incorrect human output with the same word used for incorrect LLM output, without showing that they're the same. (They aren't.)

Could you please show a bit more rationality? This sort of shit is at the very least disingenuous, if not worse (stupidity), and it does not lead to productive discussion. Sorry to be blunt, but you're just wasting everyone's time here; this is already hitting Brandolini's law.

I won't address the rest of your comment (there's guilt by association there, BTW) or further comments showing the same lack of rationality. However, I had to point this out, especially for the sake of the other posters.