this post was submitted on 01 Oct 2023
We are nowhere near AI writing our software unattended. Not even close. People really overestimate the state of AI.
I'm an AI nerd and yes, nowhere close. AI can write code snippets pretty well, and that'll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a one-man billion-dollar company asks his super-AI to "build the next Twitter", that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.
Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software-dev company together, which got pretty good results, but I wouldn't trust that without a lot more research. I've found individual LLMs given a task tend to get tunnel vision, so they could easily get stuck in a loop trying the same wrong code or design repeatedly.
(I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)
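That tunnel-vision failure mode can be sketched with a simple guard: if an agent keeps proposing an identical candidate fix, stop retrying instead of looping forever. This is a hypothetical illustration (the `propose_fix` stub stands in for an LLM call; it is not from the paper):

```python
import hashlib

def propose_fix(error_msg, attempt):
    # Stand-in for an LLM call; a "stuck" model returns the same
    # candidate patch no matter how many times it is asked.
    return "if x = 1: return True"  # same broken suggestion every time

def repair_loop(error_msg, max_attempts=5):
    seen = set()
    for attempt in range(max_attempts):
        candidate = propose_fix(error_msg, attempt)
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest in seen:
            # The model is looping on an identical suggestion:
            # bail out instead of burning more attempts and tokens.
            return None
        seen.add(digest)
        # (A real system would test `candidate` here and return it
        # if the tests pass; this sketch only detects repetition.)
    return None

result = repair_loop("SyntaxError: invalid syntax")
print(result)  # None: identical suggestion detected on the second attempt
```

Without a guard like this, a single-agent loop can happily burn its whole retry budget on one wrong idea.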
Dude, you need to take a closer look at that paper you linked, if you consider that "pretty good results". They have a github repo with screenshots of some of the "products", which should give you some idea https://github.com/OpenBMB/ChatDev/tree/main/misc .
Not to mention the terrible decision making of the fake company (desktop app you have to download? no web/mobile version? for a virtual board game?)
(Also the paper never even tried to prove its main hypothesis, that all this multi agent song and dance would somehow reduce hallucinations and improve performance. There is a lot of good AI stuff coming out daily, but that particular paper - and the articles reporting on it - was pure garbage.)
True, as of today. On the other hand, future advancements could very easily change that. On the other other hand, people were saying the same about self-driving cars 10 years ago, and while they do basically work, and are coming eventually, progress there has been a lot slower than predicted.
So who knows. Could go either way.
It’s almost a philosophical question whether AI can replace us, though. Because for it to be anything more than a tool, it needs real intelligence, compassion, etc. Basically it would need a consciousness.
I’m certain it’ll replace some jobs without that, just because, as a tool, it’ll make us more efficient, and that efficiency will eliminate jobs. But I’m not seeing it replace or assimilate entire industries at this stage.
Yea…anyone who has asked ChatGPT to help them fix a piece of code or write one would know it requires a lot of human editing and good prompting. And a lot of the time, what I was trying to accomplish still wouldn’t work.
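To illustrate the kind of human editing involved: model-generated code often looks plausible but hides a subtle bug. This is a made-up example (not actual ChatGPT output):

```python
# Hypothetical "model-suggested" function: looks plausible, but
# range(1, len(items)) silently skips the first element.
def buggy_total(items):
    total = 0
    for i in range(1, len(items)):
        total += items[i]
    return total

# Human-edited version after spotting the off-by-one bug.
def fixed_total(items):
    return sum(items)

print(buggy_total([10, 20, 30]))  # 50 -- first element skipped
print(fixed_total([10, 20, 30]))  # 60
```

The fix is trivial once a person reads the code; the point is that someone still has to read it.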
True. But if AI makes people more productive it could make it really hard to find work. Especially if you're straight out of college with zero experience.
Finding a job as a junior is already a bit harder than it was because so many developers are working remotely, which is much harder to do when you are a junior developer.