this post was submitted on 10 Aug 2023
207 points (100.0% liked)

Asklemmy


Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!

[–] Atramentous@lemm.ee 70 points 1 year ago (4 children)

High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

[–] Atramentous@lemm.ee 30 points 1 year ago (1 children)

I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)

[–] limeaide@lemmy.ml 7 points 1 year ago (2 children)

How has this been received?

I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

[–] Atramentous@lemm.ee 8 points 1 year ago

I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

ChatGPT is the same. I would never use the content it makes without verifying it first.

[–] vagrantprodigy@lemmy.whynotdrs.org 5 points 1 year ago (2 children)

Is it not already? I've found it to be far less reliable than Wikipedia.

[–] phillaholic@lemm.ee 16 points 1 year ago

Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

[–] jossbo@lemmy.ml 5 points 1 year ago

I'm a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I'm done, about an hour after I started.

I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.

[–] flynnguy@programming.dev 50 points 1 year ago (4 children)

I had a coworker come to me with an "issue" he'd learned about. It was wrong, it wasn't really an issue, and it came out that he'd gotten it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.

I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.

It's just predictive text, it's not really AI.

[–] Echo71Niner@kbin.social 17 points 1 year ago

I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review before responding is the clearest indication that it is not an AI. You can point out instances where ChatGPT provides incorrect information, correct it, and within 5 seconds of asking again it will repeat the same inaccurate information in its response.

[–] dbilitated@aussie.zone 5 points 1 year ago

I think learning where it can actually help is a bit of an art. It's just predictive text, but it's very good predictive text. If you know what you need and get good at giving it the right input, it can save a huge amount of time. You're right though, it doesn't offer much if you don't already know what you need.

[–] paNic@feddit.uk 47 points 1 year ago

A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.

[–] Lockely@pawb.social 24 points 1 year ago (2 children)

I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.

[–] lemmyvore@feddit.nl 7 points 1 year ago

The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is ever broken into.

[–] fidodo@lemm.ee 19 points 1 year ago* (last edited 1 year ago) (2 children)

Why should anyone care? I don't go around telling people every time I use Stack Overflow. Gotta keep in mind GPT makes shit up half the time, so of course I test and cross-reference everything, but it's great for narrowing your search space.

[–] akulium@feddit.de 9 points 1 year ago (8 children)

I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.

The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.

I think it is fine to use ChatGPT if you know what you are doing, but if you don't know what you are doing and try to hide it with ChatGPT, people will find out. In that case you should discuss it with the people you are working with before you waste their time.

[–] boatswain@infosec.pub 3 points 1 year ago (1 children)

The problem with using it is that you might be sending company proprietary or sensitive information to a third party that's going to mine that information and potentially expose that information, either directly or by being hacked. For example, this whole thing with Samsung data: https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/

[–] bitsplease@lemmy.ml 16 points 1 year ago* (last edited 1 year ago) (2 children)

Not ChatGPT, but I tried using Copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin before long, because I found it had the opposite effect.

Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Write the code

It would be

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Start writing the code and wait for the auto complete
  4. Read the auto complete and decide if it does exactly what I want
  5. Do one of the following, depending on step 4:
     - 5a. Use the autocomplete as-is
     - 5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
     - 5c. Ignore the autocomplete and write the code yourself

idk about you, but the first set of steps just seems like a whole lot less hassle than the second, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.

I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets

EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form

[–] CaptainPike 14 points 1 year ago (4 children)

I'm a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it's nice to be able to brainstorm and get ideas bounced back. I don't exactly have friends to do that with.

[–] boatswain@infosec.pub 4 points 1 year ago (1 children)

I do the same thing; it's been great. ChatGPT is often problematic in other scenarios because it will sometimes just make stuff up, but that's nothing but a positive for brainstorming D&D plots. I did tell my players though.

[–] vagrantprodigy@lemmy.whynotdrs.org 13 points 1 year ago (1 children)

Some of my co-workers use it, and it's fairly obvious, usually because they are putting out even more inaccurate info than normal.

[–] Magnetar@feddit.de 7 points 1 year ago (1 children)

Not because their grammar and phrasing improved suddenly?

[–] RagnarokOnline@reddthat.com 12 points 1 year ago (2 children)

Only used it a couple of times for work when researching some broad topics like data governance concepts.

It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension,” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.

My bosses use it to write their emails (ESL).

[–] dbilitated@aussie.zone 5 points 1 year ago

ESL is actually a great use, although there's a risk someone might not catch a hallucination/weird tone issue. Still it would be really helpful there.

[–] limeaide@lemmy.ml 12 points 1 year ago

My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. One email he's not even capitalizing or spell-checking, and the next is over-explaining simple things and is half irrelevant.

I've used it a couple times when I can't fully put into words what I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.

[–] henfredemars@infosec.pub 12 points 1 year ago* (last edited 1 year ago) (2 children)

I use it to write performance reviews because in reality HR has already decided the results before the evaluations.

I'm not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.

To be clear: I don't support this but it's the reality I live in.

[–] DickFiasco@lemm.ee 7 points 1 year ago (1 children)

This is exactly what I use it for. I have to write a lot of justifications for stuff like taking training, buying equipment, going on business travel, etc. - text that will never be seriously read by anyone and is just a check-the-box exercise. The quality and content of the writing is unimportant as long as it contains a few buzz-phrases.

[–] zebus@lemmy.sdf.org 6 points 1 year ago (1 children)

Just chiming in as another person who does this, it's absolutely perfect. I just copy and paste the company bs competencies, add in a few bs thoughts of my own, and tell it to churn out a full review reinforcing how they comply with the listed competencies.

It's perfect, just the kinda bs HR is looking for, I get compliments all the time for them rofl.

[–] DickFiasco@lemm.ee 4 points 1 year ago

Work smarter, not harder, lol.

[–] vegai@suppo.fi 11 points 1 year ago* (last edited 1 year ago) (1 children)

I might tell them just as I might tell them I used google to find out something. Doesn't really pop up in conversation that often, but I wouldn't hide the fact. It's just almost totally irrelevant.

[–] Fizz@lemmy.nz 10 points 1 year ago

When I'm pissed off I use it to make my emails sound friendly.

[–] Behaviorbabe@kbin.social 7 points 1 year ago (1 children)

Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.

[–] 520@kbin.social 5 points 1 year ago

Super unethical and unrepentant about it.

Super illegal in most jurisdictions too.

I've found ChatGPT is good for small tasks that require me to code in languages I don't use often and don't know well. My prime examples are writing CMakeLists.txt files, and generating regex patterns.

Also, if I want to write a quick little bash script to do something, it's much better at remembering syntax and string handling tricks than me.
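That "trust but verify" approach matters most for regex, where a suggested pattern can look plausible but quietly fail on edge cases. A minimal sketch of how one might check an assistant-suggested pattern before trusting it (the date pattern and test strings here are hypothetical examples, not actual ChatGPT output):

```python
import re

# A pattern an assistant might suggest for ISO 8601 dates (YYYY-MM-DD);
# treat it as untrusted until it survives known-good and known-bad cases.
suggested = r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$"

should_match = ["2023-08-10", "1999-12-31"]
should_reject = ["2023-13-01", "23-08-10", "2023-08-32"]

pattern = re.compile(suggested)
for s in should_match:
    assert pattern.match(s), f"expected match: {s}"
for s in should_reject:
    assert not pattern.match(s), f"expected reject: {s}"
print("pattern passed all checks")
```

The same tiny harness works for any suggested pattern: keep growing the two lists as new edge cases come to mind, and rerun before reuse.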

[–] WackyTabbacy42069@reddthat.com 7 points 1 year ago* (last edited 1 year ago)

I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.

I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me: I always wear bone-conducting headsets that let the AI whisper in my ear without shutting me out from the world, and I sometimes talk to my watch.

The responses to learning what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some people can no longer stay in comfortable denial of its efficacy, they turn to speaking out against its use.

Edit: just fixed its translation method. Now the watch will hear non-english speech and automatically translate it for me too (uses Whisper API)

[–] awkwardparticle@kbin.social 6 points 1 year ago

My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things, until the answers started to become incorrect and not useful.

[–] Haus@kbin.social 6 points 1 year ago (2 children)

Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.

I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.

[–] mojo@lemm.ee 6 points 1 year ago

Yes, although there's been a huge spike in cancer diagnosis I've been giving out since doing so. Whoops!

[–] hsl@wayfarershaven.eu 5 points 1 year ago

My job actively encourages using AI to be more efficient and rewards curiosity/creative approaches. I'm in IT management.

[–] ArcticPrincess@lemmy.ml 5 points 1 year ago

A friend of mine just used it to write a script for an Amazing Race application video. It was quite good.

How the heck did it access enough source material to be able to imitate something that specific and do it well? Are we humans that predictable?

[–] brunofin@lemm.ee 5 points 1 year ago

We openly use and abuse it from top to bottom of the company, and in my case add Copilot to that as well.

[–] a_seattle_ian@lemmy.ml 5 points 1 year ago* (last edited 1 year ago) (4 children)

I'm interested in finding ways to use it, but when I'm writing code I really like the spectrum of different answers on Stack Overflow, with comments on WHY they did it that way. Might use it for boring emails though.

[–] 1984@lemmy.today 4 points 1 year ago* (last edited 1 year ago)

I use it at work but gladly tell the boss... It's all pluses if we can do more trivial work faster. More time to relax. They don't watch what I do during the day. The boss relaxes too. All good.

[–] TheRealMalc 4 points 1 year ago* (last edited 1 year ago)

I don't see any reason not to use it to (keyword) help with your work. I think it would be wise not to use its responses verbatim, and to fact-check anything it gives you. Additionally, turn off chat history and do not enter any details about yourself, or your employer, into the prompts. Keep things generic whenever you can.

I've run emails through it to check tone since I'm hilariously bad at reading tone through text, but I'm pretty limited in how I can make use of that. There's info I deal with that is sensitive and/or proprietary that I can't paste into just any text box without potential severe repercussions.

[–] vext01@lemmy.sdf.org 4 points 1 year ago

I use it as a search engine for the LLVM docs.

Works so much better than doxygen.

But it's no secret.

I'm using the shit out of GPT-4 for coding and it works. And no, I've never told anyone, because nobody asks.

[–] PeWu@lemmy.ml 4 points 1 year ago (1 children)

Not using ChatGPT at all because its queue is always full.

[–] CaptainPike 8 points 1 year ago (1 children)

See, this confuses the hell out of me. I've NEVER been prevented from using ChatGPT by a queue. People always say the queue is a downside of not paying for it, but it seems like I just always choose times when no one is using it.

[–] nueonetwo@lemmy.ca 3 points 1 year ago

I've used it a couple times to draft reports. Most of what it writes is pretty garbage, but it's good for generating general filler sentences and structure, stuff that I don't want to waste time thinking about.

I've also used it to generate Facebook posts. It's awesome at this, though recently I've had to make a point of telling it not to include emojis or the posts get overloaded with them.

[–] datavoid@lemmy.ml 3 points 1 year ago (1 children)

I use it to speed up writing scripts on occasion, while attempting to abstract out any possibly confidential data.

I'm still fairly sure it's not allowed, though. But considering it would be easy to trace API calls and I haven't been approached yet, I'm assuming no one really cares.
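One way to "abstract out" confidential data is a scripted redaction pass rather than eyeballing it each time. A rough sketch, assuming simple regex rules cover the data involved (the patterns and placeholder names are illustrative, not a complete safeguard):

```python
import re

# Hypothetical redaction pass run over a script before pasting it into a
# chatbot: masks emails, IPv4 addresses, and anything that looks like an
# assigned secret. The rules are illustrative, not exhaustive.
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"(?i)((?:api_?key|token|password)\s*=\s*)\S+"), r"\1<REDACTED>"),
]

def scrub(text: str) -> str:
    """Apply each redaction rule in order and return the sanitized text."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

snippet = 'HOST = "10.0.12.7"\napi_key = "abc123"\nadmin = "ops@example.com"'
print(scrub(snippet))
```

Anything the rules miss still leaks, so a manual read-through of the scrubbed text before pasting is still worthwhile.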
