this post was submitted on 28 Sep 2024
214 points (100.0% liked)

Technology


Archived link

Since OpenAI’s founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.

But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

In an announcement that hardly seems coincidental, chief technology officer Mira Murati said shortly before that news broke that she was leaving the company. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a “WTF” emoji in Slack.

WTF indeed.

top 23 comments
[–] storksforlegs 103 points 1 month ago

CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

what! You mean he stands to profit billions after lying about his intentions?! A techbro would never!!

[–] zante@lemmy.wtf 38 points 1 month ago (1 children)

comedy goldmine:

They could get up to 100 times what they put in, but beyond that, the money would go to the nonprofit, which would use it to benefit the public. For example, it could fund a universal basic income program to help people adjust to automation-induced joblessness.
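The capped-profit arithmetic in that quote can be sketched in a few lines (an illustrative sketch only; OpenAI's actual terms were more complex and varied by funding round, and the dollar figures below are hypothetical):

```python
def capped_payout(invested, gross_return, cap_multiple=100):
    """Split a gross return between an investor and the nonprofit
    under a 100x profit cap (illustrative sketch, not the real terms)."""
    cap = invested * cap_multiple              # the most the investor may keep
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0)  # overflow goes to the nonprofit
    return to_investor, to_nonprofit

# Hypothetical: a $1M stake that returns $250M — the investor keeps $100M,
# and the remaining $150M would go to the nonprofit.
print(capped_payout(1_000_000, 250_000_000))
```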

[–] TimLovesTech@badatbeing.social 28 points 1 month ago

“If OpenAI were to retroactively remove profit caps from investments, this would in effect transfer billions in value from a non-profit to for-profit investors,” said Jacob Hilton, a former employee of OpenAI who joined before it transitioned from a nonprofit to a capped-profit structure.

I'm sure the investors weren't selling him on the idea that if they got a bigger return he would as well, surely.
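The value transfer Hilton describes is easy to quantify under the same 100x-cap assumption (a sketch with made-up numbers, not OpenAI's actual figures):

```python
def value_moved_by_removing_cap(invested, gross_return, cap_multiple=100):
    """Value that shifts from the nonprofit to an investor if a 100x
    profit cap is retroactively lifted (illustrative numbers only)."""
    cap = invested * cap_multiple
    # Without the cap, the investor also keeps the excess the
    # nonprofit would otherwise have received.
    return max(gross_return - cap, 0)

# Hypothetical: a $10M stake that grows to $5B would, with the cap
# removed, move $4B of nonprofit upside to the investor.
print(value_moved_by_removing_cap(10_000_000, 5_000_000_000))
```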

[–] PhilipTheBucket@ponder.cat 34 points 1 month ago (1 children)

I think that over the next few years Sam Altman is going to learn the same lessons that events have been trying to teach Elon Musk since circa 2021.

  1. You didn't build that. The people who work for you did.
  2. Being a big hero is contingent on you and your behavior, and can change.
  3. Those people who are giving you all this money aren't your comrades. When your usefulness is at its end, they won't give you a second thought.
[–] ClassifiedPancake@discuss.tchncs.de 10 points 1 month ago (1 children)

Elon Musk is doing fine though

[–] Cityshrimp@lemmy.sdf.org 7 points 1 month ago

Yeah, if anything, Musk is likely an example he’s aspiring to emulate.

[–] Vodulas 23 points 1 month ago (1 children)

Can't sell you out if you never bought in

[–] Anyolduser@lemmynsfw.com 7 points 1 month ago

I was about to say ...

Vox can speak for itself. Big sections of the public knew they were being sold a bill of goods.

[–] sonori 22 points 1 month ago

What, founder of cryptoscam Worldcoin is going to cash out of a project sold primarily on hype. Say it ain’t so. /s

[–] haerrii@feddit.org 21 points 1 month ago

About time they rebrand as ClosedAI.

[–] oftheair@lemmy.blahaj.zone 21 points 1 month ago

shocked pikachu face

Tech obsessives, tech obsessing

Capitalists capitalising.

[–] FlashMobOfOne 20 points 1 month ago (1 children)

It's WeWork and Adam Neumann all over again.

You couldn't pay me to invest in this shit, and it feels a little insane that seemingly intelligent VCs are doing so.

[–] sunzu2@thebrainbin.org 12 points 1 month ago (2 children)

Don't give them your data folks!

You don't know what your inputs will be used for in the future, but then nobody was thinking that Facebook posts from 2000 would become a large piece of the training data for these LLMs lol

[–] FlashMobOfOne 10 points 1 month ago* (last edited 1 month ago)

Definitely.

Also, don't invest in companies that hand total control to one person. That's a recipe for having that one idiot blow all of your money, like Adam Neumann did. (Fun fact: Toward the end of WeWork's heyday, Neumann was burning $3k in cash a minute.)

[–] xor@infosec.pub 2 points 1 month ago* (last edited 1 month ago) (1 children)

i want them trained on me so that our future robot overlords will respect me… maybe create some simulacrum of my consciousness to live on forever

[–] ChicagoTransplant@midwest.social 3 points 1 month ago (1 children)

You think they would expend resources recreating nobodies like us? Sam gets his digital construct immortality and we get squat.

[–] xor@infosec.pub 1 points 1 month ago

i’ve been obsessively commenting on reddit for years… i’ll live on forever

[–] hddsx@lemmy.ca 19 points 1 month ago (1 children)

Gotta get out while the gettin is good. Otherwise, if you lose the copyright lawsuits… RIP

[–] QuentinCallaghan@sopuli.xyz 12 points 1 month ago

Generative AI has reached its peak after all.

[–] kibiz0r@midwest.social 18 points 1 month ago

just sold you out

They been sellin us out since the start. And they never even paid for us!

[–] tal@lemmy.today 8 points 1 month ago* (last edited 1 month ago)

I don't know whether Altman or the board is better from a leadership standpoint, but I don't think that it makes sense to rely on boards to avoid existential dangers for humanity. A board runs one company. If that board takes action that is a good move in terms of an existential risk for humanity but disadvantageous to the company, they'll tend to be outcompeted by and replaced by those who do not. Anyone doing that has to be in a position to span multiple companies. I doubt that market regulators in a single market could do it, even -- that's getting into international treaty territory.

The only way in which a board is going to be able to effectively do that is if one company, theirs, effectively has a monopoly on all AI development that could pose a risk.

I am SHOCKED.

[–] 21Cabbage@lemmynsfw.com 5 points 1 month ago

sad trombone