this post was submitted on 05 Jan 2024

No good deed goes unpunished.

[–] autotldr@lemmings.world 3 points 10 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Generative AI models like Google Bard and GitHub Copilot have a user problem: Those who rely on software assistance may not understand or care about the limitations of these machine learning tools.

On Tuesday, Daniel Stenberg, the founder and lead developer of the widely used open source projects curl and libcurl, raised this issue in a blog post in which he described the rubbish problem created by cavalier use of AI for security research.

He said that the report, produced with the help of Google Bard, "reeks of typical AI style hallucinations: it mixes and matches facts and details from old security issues, creating and making up something new that has no connection with reality."

After posting a series of questions to the forum and receiving dubious answers from the bug reporting account, Stenberg concluded no such flaw existed and suspected that he had been conversing with an AI model.

Even so, he expects the ease and utility of these tools, coupled with the financial incentive of bug bounties, will lead to more shoddy LLM-generated security reports, to the detriment of those on the receiving end.

Feross Aboukhadijeh, Socket's CEO, said the company has been using LLMs in conjunction with human reviewers to detect vulnerable and malicious open source packages in the JavaScript, Python, and Go ecosystems.


Saved 73% of original text.