this post was submitted on 30 Sep 2023
635 points (100.0% liked)

[–] mojo@lemm.ee 33 points 1 year ago (3 children)

As much as I love Mozilla, I know they're going to censor (sorry, the word is "alignment" now) the hell out of it to fit their perceived values. Luckily, if it's open source, people will be able to train uncensored models.

[–] DigitalJacobin@lemmy.ml 52 points 1 year ago (3 children)

What in the world would an "uncensored" model even imply? And give me a break, private platforms choosing to not platform something/someone isn't "censorship"; you don't have a right to another's platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.

[–] lemann@lemmy.one 13 points 1 year ago

I fooled around with some uncensored LLaMA models, and to be honest if you try to hold a conversation with most of them they tend to get cranky after a while - especially when they hallucinate a lie and you point it out or question it.

I will never forget when one of the models tried to convince me that photosynthesis wasn't real, and started getting all snappy when I said I wasn't accepting that answer 😂

Most of the censorship "fine-tuning" data that I've seen (for LoRA models, anyway) appears to be mainly scientific data, instructional data, and conversation excerpts.

[–] TheWiseAlaundo@lemmy.whynotdrs.org 10 points 1 year ago (1 children)

There's a ton of stuff ChatGPT won't answer, which is supremely annoying.

I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

OpenAI is also a complete prude about nudity, so Eilistraee (Drow goddess that dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

Sarcasm is, for the most part, very difficult to do... If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.

There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally, and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D... a lot, lol... and even with my use case I'm bumping my head on some of the censorship issues with LLMs.

[–] Spzi@lemm.ee 2 points 1 year ago (1 children)

Interesting, may I ask you a question regarding uncensored local / censored hosted LLMs in comparison?

There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don't, so that it can develop a bias to produce more of the appreciated stuff.

In this sense, an uncensored model would be no better than a million monkeys on typewriters. Do we differentiate between technically necessary bias and political agenda? Is that even possible? Do uncensored models produce more nonsense?

That's a good question. Apparently, these large data companies start with their own unaligned dataset and then introduce bias by training the model further. The censorship we're talking about isn't necessarily trimming good input vs. bad input data, but rather "alignment" which is intentionally introduced afterwards.

Eric Hartford, the man who created Wizard (the LLM I use for uncensored work), wrote a blog post about how he was able to unalign LLaMA over here: https://erichartford.com/uncensored-models

You probably could trim input data to censor output down the line, but I'm assuming that data companies don't because it's less useful in a general sense and probably more laborious.
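
For anyone curious what that looks like in practice, here's a rough, minimal sketch of the kind of dataset filtering described in that post: drop the refusal-style "aligned" examples from the instruction data before fine-tuning. The field names, file names, and refusal phrases below are placeholders for illustration, not Hartford's actual script.

```python
import json

# Phrases that typically signal a canned "aligned"/refusal response.
# These specific strings are assumptions for illustration only.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but i can't",
    "it is not appropriate",
]

def is_refusal(response: str) -> bool:
    """Return True if the assistant response looks like a canned refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(in_path: str, out_path: str) -> None:
    """Drop refusal-style examples from a JSON Lines instruction dataset
    (hypothetical 'instruction'/'response' fields) before fine-tuning."""
    kept, dropped = 0, 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            example = json.loads(line)
            if is_refusal(example.get("response", "")):
                dropped += 1
                continue
            fout.write(json.dumps(example) + "\n")
            kept += 1
    print(f"kept {kept}, dropped {dropped}")

if __name__ == "__main__":
    filter_dataset("instructions.jsonl", "instructions_uncensored.jsonl")
```

The filtered dataset is then used for a normal fine-tune; nothing about the base model changes until that training step.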

[–] mojo@lemm.ee 9 points 1 year ago (2 children)

Anything that prevents it from answering my query. If I ask it how to make a bomb, I don't want it to be censored. It's gathering this from public data they don't own, after all. I agree with Mozilla's principles, but LLMs are also tools and should be treated as such.

[–] salarua@sopuli.xyz 19 points 1 year ago* (last edited 1 year ago) (1 children)

shit just went from 0 to 100 real fucking quick

for real though, if you ask an LLM how to make a bomb, it's not the LLM that's the problem

[–] mojo@lemm.ee 6 points 1 year ago* (last edited 1 year ago) (2 children)

If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that's the point.

Just like I can demonize encryption by saying I should be allowed to secretly send illegal content. If I asked you straight up if encryption is a good thing, you'd probably agree. But if I brought up its inevitable bad uses in a shocking manner, would you still defend the ability to do that, or would you change your stance and say encryption is bad?

To have a strong stance means also defending the potential harmful effects, since they're inevitable. It's hard to keep values consistent, even when there are potential harmful effects of something that's for the greater good. Encryption is a perfect example of that.

[–] Lionir 10 points 1 year ago (2 children)

This is a false equivalence. Encryption only works if nobody can decrypt it. LLMs work even if you censor illegal content from their output.

[–] mojo@lemm.ee 4 points 1 year ago (2 children)

You miss the point. My point is that if you want to have a consistent view point, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.

[–] Lionir 10 points 1 year ago (2 children)

This is just enlightened centrism. No. Nobody needs to defend the harms done by technology.

We can accept the harm if the good is worth it - we have no need to defend it.

LLMs can work without the harm.

It makes sense to make technology better by reducing the harm it causes when it is possible to do so.

[–] janguv@lemmy.dbzer0.com 2 points 1 year ago

He would have been better off not talking about harm directly but the ability to cause harm; he actually used that wording in an earlier comment in this chain. (Basically strawmanned himself lol.)

Because as a standalone argument for encryption, it's fairly sound – hey, the ability of somebody to cause harm via encrypted messaging channels is the selfsame ability to do good [/prevent spying/protect privacy, whistleblowers/etc], and since the good outweighs the bad, we have to protect the ability to cause harm (sadly).

The problem is it's still disanalogous – the ability to cause harm via LLM use is not the selfsame ability to do good (or to do otherwise what you want). My LLM's refusing to tell me how to make a bomb has no impact on its ability to tell me how to make a pasta bake.

[–] wagesj45@kbin.social 1 points 1 year ago

Define harm.

[–] bear@slrpnk.net 6 points 1 year ago* (last edited 1 year ago) (1 children)

What the fuck is this "you should defend harm" bullshit, did you hit your head during an entry level philosophy class or something?

The reason we defend encryption even though it can be used for harm is because breaking it means you can't use it for good, and that's far worse. We don't defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm because the good greatly outweighs the harm and they cannot be separated. The same isn't true for LLMs.

[–] mojo@lemm.ee 2 points 1 year ago (1 children)

We don't believe that at all, we believe privacy is a human right. Also you're just objectively wrong about LLMs. Offline uncensored LLMs already exist, and will perpetually exist. We don't defend tools doing harm, we acknowledge it.

[–] bear@slrpnk.net 3 points 1 year ago

> We don't believe that at all, we believe privacy is a human right.

That's just a different way to phrase what I said about defending the good side of encryption.

> Offline uncensored LLMs already exist, and will perpetually exist

I didn't say they don't exist, I said that the help and harm aren't inseparable like with encryption.

> We don't defend tools doing harm, we acknowledge it.

"My point is that if you want to have a consistent view point, you need to acknowledge and defend the harmful sides."

If you want to walk it back, fine, but don't pretend like you didn't say it.

[–] jasory@programming.dev 2 points 1 year ago (1 children)

Encryption only works if certain parties can't decrypt it. Strong encryption means those parties are everyone except the intended recipient; weak encryption still works even if 1 percent of the eavesdroppers can decrypt it.

[–] Lionir 2 points 1 year ago (1 children)

I mean, I don't understand the point of encryption that unintended parties can decrypt. Just seems like theatre to me.

But yeah, obviously the intended parties have to be able to decrypt it. I messed up in my wording.

[–] jasory@programming.dev 1 points 1 year ago

You realise that most encryption can be decrypted by a third party? Many cryptography libraries have huge flaws; even the Handbook of Applied Cryptography encouraged using Damgard et al's parameters for prime selection, even though the original authors never claimed the accuracy that others assumed (without basis). Even now, can you guess how many cryptography libraries would be broken if someone found a BPSW pseudoprime? We have arguments that they probably exist, but crypto developers just ignore the issue, whether out of ignorance or laziness.

In summary, it's all theatre, you just want to deny access to enough parties that it makes you comfortable.
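
To make the "prime selection" point concrete, here's a rough sketch of the probabilistic primality testing that key generation leans on (plain Miller-Rabin shown here; BPSW adds a strong Lucas test on top of a base-2 round). This is illustrative Python under those assumptions, not any particular library's code.

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test: True means 'probably prime'.
    A composite that passes every round is a pseudoprime for those bases,
    which is exactly the failure mode being discussed above."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # probably prime

def random_probable_prime(bits: int) -> int:
    """Pick random odd candidates until one passes the test - so any gap
    in the test leaks directly into the generated keys."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(candidate):
            return candidate
```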

[–] Spzi@lemm.ee 3 points 1 year ago (1 children)

If it has the information, why not?

Naive altruistic reply: To prevent harm.

Cynic reply: To prevent liabilities.

If the restaurant refuses to put your fries into your coffee, because that's not on the menu, then that's their call. Can be for many reasons, but it's literally their business, not yours.

If we replace fries with fuse, and coffee with gun powder, I hope there are more regulations in place. What they sell and to whom and in which form affects more people than just buyer and seller.

Although I find it pretty surprising that, in this case, corporations are self-regulating faster than lawmakers can say 'AI'. That's odd.

[–] mojo@lemm.ee 1 points 1 year ago

This is very well said. They're allowed to not serve you these things, but we should still be able to use these things ourselves and make our glorious gun powder fries coffee with a spice of freedom all we want!

[–] StickBugged@lemm.ee 8 points 1 year ago (1 children)

If you ask how to build a bomb and it tells you, wouldn't Mozilla get in trouble?

[–] mojo@lemm.ee 7 points 1 year ago* (last edited 1 year ago) (1 children)

Do gun manufacturers get in trouble when someone shoots somebody?

Do car manufacturers get in trouble when someone runs somebody over?

Do search engines get in trouble if they accidentally link to harmful sites?

What about social media sites getting in trouble for users uploading illegal content?

Mozilla doesn't need to host an uncensored model, but their open source AI should be able to be trained to be uncensored. So I'm not asking them to host this themselves, which is an important distinction I should have made.

And uncensored LLMs already exist, so any damage they could cause is already possible.

[–] Spzi@lemm.ee 1 points 1 year ago (1 children)

> Do car manufacturers get in trouble when someone runs somebody over?

Yes, if it can be shown the accident was partially caused by the manufacturer's neglect. If a safety measure was not in place or did not work properly. Or if it happens suspiciously more often with models from this brand. Apart from solid legal trouble, they can get into PR trouble if many people start to think that way, no matter if it's true.

[–] mojo@lemm.ee 1 points 1 year ago (1 children)
[–] Spzi@lemm.ee 1 points 1 year ago (1 children)

Then let me spell it out: If ChatGPT convinces a child to wash their hands with homemade bleach, be sure to expect lawsuits and a shit storm coming for OpenAI.

If that occurs, but no liability can be found on the side of ChatGPT, be sure to expect petitions and a shit storm coming for legislators.

We generally expect individuals and companies to behave in society with peace and safety in mind, including the safety of strangers and minors.

Liabilities and regulations exist for these reasons.

[–] mojo@lemm.ee 1 points 1 year ago

Again... this is still missing the point.

Let me spell it out: I'm not asking companies to host these services, so they wouldn't be held liable.

For this example to be relevant, ChatGPT would need to be open source and let you plug in your own model. We should have the freedom to plug in our own trained models, even uncensored ones. This is already the case with LLaMA and other AI systems, and I'm encouraging Mozilla's AI to allow us to do the same thing.

[–] whoisearth@lemmy.ca 5 points 1 year ago

As an aside, I'm in corporate. I love how gung-ho we are on AI, even as lawsuits, potential lawsuits, and investigative journalism keep coming out about all the shady shit AI and the companies behind it are doing. And the SMT ain't dumb; they know about all this, yet we're still driving forward.

[–] KingThrillgore@lemmy.ml 29 points 1 year ago

I want to give them the benefit of the doubt. I really do. I am going to watch this with a critical eye, however.

[–] kubica@kbin.social 24 points 1 year ago

Wishing they would say something more, the site has been like that for some time.

[–] donuts@kbin.social 19 points 1 year ago

All I want to know is if they are going to pillage people's private data and steal their creative IP or not.

Ethical AI starts and ends with open, transparent, legitimate and ethically sourced training data sets.

[–] baduhai@sopuli.xyz 15 points 1 year ago

Cautiously optimistic about this one.

[–] CaptKoala@lemmy.ml 14 points 1 year ago* (last edited 1 year ago)

Couldn't give a fuck, there's already far too much bad blood regarding any form of AI for me.

It's been shoved in my face, phone and computer for some time now. The best AI is one that doesn't exist. AGI can suck my left nut too, don't fuckin care.

Give me livable wages or give me death, I care not for anything else at this point.

Edit: I care far more about this for privacy reasons than the benefits provided via the tech.

The fact these models reached "production ready" status so quickly is beyond concerning. I suspect the companies are hoping to harvest as much usable data as possible before being regulated into (best case) oblivion. It really no longer seems that I can learn my way out of this, as I've been doing since the beginning, because the technology is advancing too quickly for users, let alone regulators, to keep it in check.

[–] lowleveldata@programming.dev 13 points 1 year ago (2 children)
[–] Limitless_screaming@kbin.social 37 points 1 year ago* (last edited 1 year ago)

More transparent about data collection, and less likely to reinforce biases. Mozilla's vision for trustworthy AI

[–] AdmiralShat@programming.dev 21 points 1 year ago

Open source

[–] bahmanm@lemmy.ml 13 points 1 year ago

Something that I'll definitely keep an eye on. Thanks for sharing!

Incredibly welcomed. We need more ethical, non-profit AI researchers in the sea of corporate for-profit AI companies.

[–] soulfirethewolf@lemdro.id 6 points 1 year ago (1 children)

Please just put the 30 million into improving the browser. Not all this dumb stuff

[–] fettuccinecode@programming.dev 10 points 1 year ago

The offline translation feature available in Firefox 108 and later is AI-powered, and it works well enough for now.

[–] Turun@feddit.de 3 points 1 year ago* (last edited 1 year ago) (1 children)

In which ways does this differ from Stability AI, which made Stable Diffusion and also has an LLM afaik?

[–] RobotToaster@mander.xyz 7 points 1 year ago

The Stability models aren't open source; the moralistic licence they're released under violates the open source definition.

[–] PhlubbaDubba@lemm.ee 1 points 1 year ago (2 children)

Isn't AI impossible to meaningfully open source because of how learning models work?

[–] blind3rdeye@lemm.ee 8 points 1 year ago

No? The code for the model can be open-source - and that's pretty valuable. The training data can be made openly available too - and that's perhaps even more valuable. And the post-training weights for the model can be made open too.

Each of those things is very meaningful and useful. If those things are open, then the AI can be used and adjusted for different contexts. It can be run offline; it can be retrained or tweaked. It can be embedded into other software. etc. It is definitely meaningful to open source that stuff.
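
As a concrete example, here's roughly what running an open-weights model offline looks like with the Hugging Face transformers API. The model name is just a placeholder assumption, not a specific Mozilla release.

```python
# Once the weights are downloaded, everything below runs locally with no API calls.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/some-open-model"  # hypothetical open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain what an open-source language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can then be fine-tuned, quantized, or embedded into other software, which is the point being made above.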

[–] OsrsNeedsF2P@lemmy.ml 5 points 1 year ago

Stable Diffusion has entered the chat
