this post was submitted on 15 Jul 2023
97 points (100.0% liked)

Asklemmy

1457 readers
61 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using or getting support for Lemmy itself: see the list of support communities and community-finding tools below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

founded 5 years ago
[–] PM_ME_VINTAGE_30S@lemmy.sdf.org 4 points 1 year ago (1 children)

IMO the use case for ChatGPT is stuff that's not important but still tedious to write. For example, I'm applying for engineering work and my résumé "looks" like shit, so I'm going to need to write a shitload of cover letters. I don't want to write them, like literally at all. It's boring and stupid. But ChatGPT will happily write them. Sure, there might be factual errors, but I'll read the output and correct them by hand. I still save time by not having to write boilerplate or structure sentences.

Also, ChatGPT can work with programming languages. For example, I had ChatGPT write me a matrix algebra class in C++ just for fun. The first iteration didn't compile, but it had the gist of how to represent a matrix and matrix multiplication. The second iteration compiled and worked on the cases I tried. Would I use it in production? Probably not while Boost exists. However, I probably could have used it as a starting point for a matrix algebra library if I really wanted to.
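For a sense of scale, the core of such a class fits in a few dozen lines. This is a hand-written hypothetical sketch of the kind of code involved (not the actual ChatGPT output): a flat `std::vector` for storage and a naive triple-loop multiply.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal dense matrix class -- a sketch, not production code.
// For real work, prefer an established library such as Boost.uBLAS or Eigen.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}

    // Row-major element access.
    double& at(std::size_t r, std::size_t c) { return data_[r * cols_ + c]; }
    double at(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

    // Naive O(n^3) matrix multiplication.
    Matrix operator*(const Matrix& other) const {
        if (cols_ != other.rows_)
            throw std::invalid_argument("dimension mismatch");
        Matrix result(rows_, other.cols_);
        for (std::size_t i = 0; i < rows_; ++i)
            for (std::size_t k = 0; k < cols_; ++k)
                for (std::size_t j = 0; j < other.cols_; ++j)
                    result.at(i, j) += at(i, k) * other.at(k, j);
        return result;
    }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;
};
```

The loop order (i, k, j) keeps the inner loop walking memory contiguously, which is the kind of detail a generated first draft often gets wrong.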

> The fact that people are asking it questions and believing it is just so plain stupid.

The fact of the matter is that people are more gullible than they think. People have been encouraged to blindly trust authority figures since the dawn of civilization. We are simply reaping the consequences of our continued complacency.

It's not unreasonable to ask ChatGPT (or anyone or anything else) questions. The issue is when they are treated as all-seeing oracles. ChatGPT in particular makes for a poor search engine: it is designed to optimize how convincing its output sounds, not how true it is, so it is especially likely to produce convincing-sounding lies.

> And if I need to do research to tell whether it's just talking bullshit again - why bother asking it in the first place?

Well, it can point you in a direction to begin your own research. However, the main use case is really when you don't want to do the work and you don't care about the quality of the work. I don't think people fully realize that workers generally don't want to do their work (would you do your job for free?), because that would contradict the assumption that work under capitalism is natural, voluntary, and not imposed upon the world.

[–] ranok@sopuli.xyz 2 points 1 year ago

LLMs can be super useful if there is an authoritative source of truth. I wrote a LangChain app that takes my Python code, asks ChatGPT to optimize it, then uses symbolic analysis to perform equivalency checking. I get to write clear, simple Python code, and I offload the optimization to a bot.