this post was submitted on 22 May 2023

Futurology

[–] shreddy_scientist@lemmy.ml 2 points 1 year ago (1 children)

Another reason www.perplexity.ai is so much better. Not only has it always provided sources, it now has filters (like Academic) and uses GPT-4 after a recent update!

[–] Veritas@lemmy.ml 7 points 1 year ago (1 children)

Does it always use GPT-4, or only if you provide an API key?

[–] shreddy_scientist@lemmy.ml 3 points 1 year ago

To use GPT-4, or Copilot as they call it, requires making an account. They're cool with alias emails at least. The bottom left of the search field has the filters, which are set to All by default, but I prefer Academic. The bottom right of the search field is where you can sign in and enable GPT-4/Copilot. But no matter the setup, sources are always provided, which is phenomenal, especially for academic info!

[–] pancake@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

"LLMs don't understand what they say, they just try to sound like they do" is a sentence that denotes a good understanding of how AI works, as expected from an "expert on AI". However, it makes a comparison with human intelligence that either assumes we know how it works, or shows a fundamental misunderstanding of how it works. For all we know, either our brain is a mystery (and thus we can't really state whether an AI "understands" anything, since we can't even define what that means), or, as research on neurobiology seems to indicate, it's just large-scale deep learning, with more ad-hockery for evolutionary stuff, and two orders of magnitude more energy-efficient.

[–] arghya_333@lemmy.ml 0 points 1 year ago (1 children)

True. Anyone who has studied AI to a basic degree will know that any form of AI we have is far from sentient.

[–] pancake@lemmy.ml 1 points 1 year ago

To be clear, I'm not trying to say that AI is sentient; I do not believe that. My point is that, as far as understanding is concerned, we don't know enough about the workings of either AI or ourselves to draw a real distinction between their "understanding" of concepts and ours. We don't know how ChatGPT works, and we don't know how our brain works, so stating that there is any fundamental difference between the two is no more justified than any random claim, especially when the concept itself (understanding) isn't even formally defined.