@remixtures@tldr.nettime.org Oh fuck off. Why the fuck should we use these bullshit generators only to have to fine toothcomb the garbage they produce? #FuckAI
@joachim: You have every right not to use LLMs. Personally, I find them a great help for improving my productivity. Every person has their own reasons for using or not using generative AI. Nevertheless, I'm afraid that this technology - like many other productivity-increasing technologies - will become a fact of our daily lives. The issue here is how best to adapt it to our own advantage. Open-source LLMs should be preferred, of course. But I don't think that mere stubbornness is a very good strategy for dealing with new technology.
"If we don’t use AI, we might be replaced by someone who will. What company would prefer a tech writer who fixes 5 bugs by hand to one who fixes 25 bugs using AI in the same timeframe, with a “good enough” quality level? We’ve already seen how DeepSeek AI, considered on par with ChatGPT’s quality, almost displaced more expensive models overnight due to the dramatically reduced cost. What company wouldn’t jump at this chance if the cost per doc bug could be reduced from $20 to $1 through AI? Doing tasks more manually might be a matter of intellectual pride, but we’ll be extinct unless we evolve."
https://idratherbewriting.com/blog/recursive-self-improvement-complex-tasks
@remixtures@tldr.nettime.org Climate change, you fuckhead.
@joachim@drupal.community Just because Silicon Valley companies over-engineer their models, that doesn't mean it must be necessarily so... Look at DeepSeek: https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md
@remixtures@tldr.nettime.org
I would argue against this, since it has been shown that using LLMs makes the author less critical and thus more likely to accept broken code than if they had written it themselves and then reviewed it.
The article below explains it better, but the short version is: using AI decreases your ability to think and reason.
And I consider those skills vital for any programmer.
@DevWouter@mastodon.social If you only ever use one AI model for all tasks and never question its outputs, of course your critical thinking skills will likely not be very high. I suspect this study is a bit flawed, in the sense that it's probably finding something that was already there from the beginning: people with weaker critical thinking skills tend to trust the outputs of a single model more. Personally, I tend to use at least three LLMs. Why? Because I always like to have the opinion of more than one voice. But these are critical thinking skills that can be taught.
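The "more than one voice" approach could be sketched roughly as follows. This is purely illustrative: the model names and the `query_model` stub are hypothetical placeholders, not real provider APIs, and any real implementation would call each vendor's own client library.

```python
# Sketch: cross-checking answers from several LLMs instead of trusting one.
# The models and canned answers below are made-up placeholders.
from collections import Counter

def query_model(model: str, prompt: str) -> str:
    # Placeholder stub; a real version would call the provider's API.
    canned = {
        "model-a": "use a context manager",
        "model-b": "use a context manager",
        "model-c": "call close() manually",
    }
    return canned[model]

def cross_check(prompt: str, models: list[str]) -> tuple[str, bool]:
    """Return the majority answer and whether any model dissented."""
    answers = [query_model(m, prompt) for m in models]
    majority, votes = Counter(answers).most_common(1)[0]
    needs_review = votes < len(answers)  # any disagreement flags manual review
    return majority, needs_review

answer, needs_review = cross_check(
    "How should I release a file handle in Python?",
    ["model-a", "model-b", "model-c"],
)
```

The point of the sketch is the disagreement flag: when the models don't agree, that is exactly the cue to apply your own judgment rather than accept any single output.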