This post was submitted on 02 Mar 2025
3 points (100.0% liked)

Programming

2 readers
1 user here now

A magazine created for the discussion of computer programming-related topics.

Rules

Please keep submissions on topic and of high quality. No image posts, no memes, no politics. Keep the magazine focused on programming topics, not general computing topics. Direct links to app demos (unrelated to programming) will be removed. No surveys.

founded 2 years ago

This article nails it. Just because LLMs don't deliver flawless code doesn't mean you shouldn't use their help; dismissing them outright seems completely short-sighted to me. Just don't rely on a single model, and don't expect it to produce the exact solution to your prompt on the first attempt. That's exactly how it often goes when collaborating with another human being, too.

“Just because code looks good and runs without errors doesn’t mean it’s actually doing the right thing. No amount of meticulous code review—or even comprehensive automated tests—will demonstrably prove that code actually does the right thing. You have to run it yourself!

Proving to yourself that the code works is your job. This is one of the many reasons I don’t think LLMs are going to put software professionals out of work.

LLM code will usually look fantastic: good variable names, convincing comments, clear type annotations and a logical structure. This can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.

The way to avoid those problems is the same as how you avoid problems in code by other humans that you are reviewing, or code that you’ve written yourself: you need to actively exercise that code. You need to have great manual QA skills.

A general rule for programming is that you should never trust any piece of code until you’ve seen it work with your own eyes—or, even better, seen it fail and then fixed it.”

https://lnkd.in/dVV7knTD
#AI #GenerativeAI #SoftwareDevelopment #Programming #PromptEngineering #LLMs #Chatbots
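
To make "exercising the code" concrete, here is a minimal sketch of the kind of smoke test the article is talking about. The `slugify` function is a hypothetical stand-in for something an LLM might generate; the point is running it against real inputs, not just reading it:

```python
# Hypothetical LLM-generated function: looks clean, reads well,
# but "looks good" proves nothing until it actually runs.
def slugify(title: str) -> str:
    """Turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Exercise it yourself, including the edge cases the happy path hides.
assert slugify("Hello World") == "hello-world"
assert slugify("  leading and trailing spaces  ") == "leading-and-trailing-spaces"
assert slugify("") == ""  # does the empty case behave sensibly?
print("smoke tests passed")
```

Thirty seconds of running it like this tells you more than ten minutes of admiring the variable names.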

top 6 comments
[–] joachim@drupal.community 2 points 1 week ago (1 child)

@remixtures@tldr.nettime.org Oh fuck off. Why the fuck should we use these bullshit generators only to have to go through the garbage they produce with a fine-tooth comb? #FuckAI

[–] remixtures@tldr.nettime.org 2 points 1 week ago* (last edited 1 week ago) (1 child)

@joachim: You have every right not to use LLMs. Personally, I find them a great help for improving my productivity. Every person has their own reasons for using or not using generative AI. Nevertheless, I'm afraid that this technology - like many other productivity-increasing technologies - will become a fact of our daily lives. The issue here is how best to adapt it to our own advantage. Open-source LLMs should be preferred, of course. But I don't think that mere stubbornness is a very good strategy for dealing with new technology.

"If we don’t use AI, we might be replaced by someone who will. What company would prefer a tech writer who fixes 5 bugs by hand to one who fixes 25 bugs using AI in the same timeframe, with a “good enough” quality level? We’ve already seen how DeepSeek AI, considered on par with ChatGPT’s quality, almost displaced more expensive models overnight due to the dramatically reduced cost. What company wouldn’t jump at this chance if the cost per doc bug could be reduced from $20 to $1 through AI? Doing tasks more manually might be a matter of intellectual pride, but we’ll be extinct unless we evolve."

https://idratherbewriting.com/blog/recursive-self-improvement-complex-tasks

[–] joachim@drupal.community 1 point 1 week ago (1 child)

@remixtures@tldr.nettime.org Climate change, you fuckhead.

[–] remixtures@tldr.nettime.org 2 points 1 week ago

@joachim@drupal.community Just because Silicon Valley companies over-engineer their models, that doesn't mean it must necessarily be so... Look at DeepSeek: https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md

[–] DevWouter@mastodon.social 2 points 1 week ago (1 child)

@remixtures@tldr.nettime.org

I would argue against this, since it's been shown that using LLMs results in the author being less critical, and thus more likely to accept broken code than when writing it themselves and then reviewing it.

The article below explains it better, but the short version is that using AI decreases your ability to think and reason.

And I consider those skills vital for any programmer.

[–] remixtures@tldr.nettime.org 1 point 1 week ago

@DevWouter@mastodon.social If you use just one AI model all the time, for every task, and never question its outputs, then of course your critical thinking skills will likely suffer. I suspect the study is a bit flawed, in the sense that it's probably finding something that was already there from the beginning: people with weaker critical thinking skills tend to place more trust in the outputs of a single model. Personally, I tend to use at least three LLMs. Why? Because I always like to have the opinion of more than one voice. And these are critical thinking skills that can be taught.
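
For what it's worth, that cross-checking habit is easy to script. A minimal sketch, assuming nothing about any particular provider (the model names and the `query_model` helper are hypothetical placeholders you'd replace with real client calls):

```python
# Hypothetical sketch: send the same prompt to several models and put
# the answers side by side, instead of trusting a single voice.

MODELS = ["model-a", "model-b", "model-c"]  # placeholder names

def query_model(model: str, prompt: str) -> str:
    # Placeholder: substitute a real API call per provider here.
    return f"[{model}] answer to: {prompt!r}"

def cross_check(prompt: str) -> dict[str, str]:
    """Collect one answer per model so disagreements become visible."""
    return {model: query_model(model, prompt) for model in MODELS}

for model, answer in cross_check("Is this regex safe: ^(a+)+$ ?").items():
    print(f"--- {model} ---\n{answer}\n")
```

Where the models disagree is exactly where your own judgment has to come in.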