this post was submitted on 13 Jan 2024
43 points (100.0% liked)

Technology

top comments
[–] luciole 17 points 10 months ago* (last edited 10 months ago) (2 children)

Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

That explains it. Microsoft wants to cash in on their massive investment in OpenAI by embedding ChatGPT into every bit of software they can. Defense being an important sector for them, I'm surprised the military ban was ever in OpenAI's usage policy.

[–] maynarkh@feddit.nl 4 points 10 months ago

Stupid question, why would they need to? Couldn't they license the models under a ToS that is totally different from the public one? Isn't the public ToS just for Joe Schmoes off the street?

[–] java 2 points 10 months ago

That explains it. Microsoft wants to cash in on their massive investment in OpenAI by embedding ChatGPT into every bit of software they can.

Given how slow and laggy ChatGPT-4 is, they're getting ahead of themselves. Ultimately, this will drive existing customers to competitors.

[–] frog 12 points 10 months ago

Yep, OpenAI is totally benevolent and only has our best interests at heart.

[–] petrescatraian@libranet.de 5 points 10 months ago

...Guess we're fucked...

Now I really wanna roll back time a bit.

[–] autotldr@lemmings.world 2 points 10 months ago

🤖 I'm a bot that provides automatic summaries for articles:

OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.

Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large language models, a type of software tool that can rapidly and dexterously generate sophisticated text outputs.

While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.

Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”


Saved 79% of original text.