this post was submitted on 28 Feb 2024
88 points (100.0% liked)

Privacy


This might also be an automatic response to prevent discussion, although I'm not sure, since it's MS's AI.

top 14 comments
[–] GammaGames 15 points 8 months ago* (last edited 8 months ago) (1 children)

Tbf, your evidence is >20-year-old documents, general EEE behavior (without examples), and something that isn’t really relevant to your initial claim. I’m not surprised it decided to respectfully hang up. Did you want it to argue?

[–] digdilem@lemmy.ml 7 points 8 months ago

OP definitely wanted an argument - but it can only have been for imaginary internet points.

Arguing with an AI is pointless - it's intellectual masturbation - and using biased and weak examples is, if anything, going to train the opponent to be more dumb. (Anyone else remember teaching Megahal to swear on IRC?)

[–] slacktoid@lemmy.ml 13 points 8 months ago

This is why we should be boycotting corporate owned LLMs.

[–] eveninghere 12 points 8 months ago* (last edited 8 months ago)

This is actually an unfair experiment. This behavior is not specific to questions about MS. Copilot is simply incapable of this type of discussion.

Copilot tends to just paraphrase text it read, and when I challenge the content, it ends the conversation like this, instead of engaging in a meaningful dialogue.

[–] Sims@lemmy.ml 10 points 8 months ago (1 children)

Every single Capitalist model or corporation will do this deliberately with all their AI integration. ALL corporations will censor their AI integration to not attack the corporation or any of their strategic 'interests'. The Capitalist elite in the west are already misusing wokeness (I'm woke) to cause global geo-political splits, and all western big tech are following the lead (just look at Gemini), so they are all biased towards the fake liberal narrative of super-wokeness, 'democracy'/freedumb, Ukraine good, Taiwan not part of China, Capitalism good, and all the other liberal propaganda and bs. It's like a liberal cancer that infects all AI tools. Nasty.

Agree or disagree with that, but probably none of us want elite psychopaths to decide what we should think/feel about the world, and it's time to ditch ALL corporate AI services and promote private, secure and open/free AI - not censored or filled with liberal dogmas and artificial ethics/morals, from data to finetuning.

[–] otacon239@feddit.de 9 points 8 months ago

ChatGPT provided a pretty good response to this, so MS likely added some clauses to avoid anything negative regarding itself.

https://chat.openai.com/share/67d28f94-7788-4044-ab40-73f8e46a32c7

[–] toastal@lemmy.ml 4 points 8 months ago (1 children)

Developers can stop using Microsoft products today; say NO to neo-EEE including Windows, WSL, GitHub, Sponsors, Copilot, VS Code, Codespaces, Azure, npm, & Teams

[–] Cwilliams 1 points 8 months ago

Good luck with that

[–] LWD@lemm.ee 4 points 8 months ago (1 children)

Large language model training is based on more than one model at a time, if that's the right term for it. One of them is the amalgam of answers from the internet (just imagine feeding Reddit into a Markov bot). The other is handcrafted responses by the corporation that runs the robot, which allow it to create (for lack of a better term) "politically correct" responses that do everything from keeping things G-rated and remaining civil to blocking suggestions of terrorism and protecting the good name of the corporation itself from being questioned.

Both of these models run on your question at the same time.
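A minimal sketch, in Python, of the "Markov bot" analogy above; the corpus string is just a placeholder, and the point is only that such a bot can do nothing but recombine words it has already seen:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=15):
    """Walk the chain, picking a random observed successor at each step."""
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Placeholder corpus; imagine feeding in a Reddit dump instead.
corpus = "the bot repeats what it read because all it knows is what it read"
print(generate(build_chain(corpus), "the"))
```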

[–] Hotzilla@sopuli.xyz 4 points 8 months ago (1 children)

Copilot runs on GPT-4 Turbo. It is not trained differently than OpenAI's GPT-4 Turbo, but it has different system prompts than OpenAI's, which make it much more likely to just quit the discussion. I have never seen OpenAI's version say it will stop the conversation, but Copilot does it daily.

[–] LWD@lemm.ee 1 points 8 months ago* (last edited 8 months ago) (1 children)

So by "different system prompts", you mean Microsoft injects something more akin to their own modifiers into the prompt before passing it over to OpenAI?

(The same way somebody might modify their own prompt, "explain metaphysics", with their own modifiers like "in the tone of a redneck"?)

I assumed OpenAI could slot in extra training data as a whole extra component, but that also makes sense to me... And would probably require less effort.

[–] Hotzilla@sopuli.xyz 3 points 8 months ago

Yeah, pretty much like that. Both Azure and the paid OpenAI API also let you modify the system prompt. There is also a creativity (temperature) property that can be adjusted: when it's too high, the model hallucinates more; when it's too low, it gives the same output every time.
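A minimal sketch of that, assuming the OpenAI Python SDK; the model name, system prompt wording, and temperature below are illustrative, not Copilot's or Azure's actual configuration (Azure OpenAI exposes the same messages and temperature parameters through its own endpoint):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        # The deployer's instructions, invisible to the end user
        # (hypothetical wording, not Microsoft's actual prompt).
        {"role": "system",
         "content": "You are a helpful assistant. End the conversation "
                    "if the user becomes hostile."},
        # The end user's actual question.
        {"role": "user",
         "content": "Explain metaphysics in the tone of a redneck."},
    ],
    temperature=0.7,  # higher = more varied output, lower = more deterministic
)
print(response.choices[0].message.content)
```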

Retraining the model costs something like a hundred million dollars and weeks of computing power.

[–] Quintus@lemmy.ml 2 points 8 months ago

Edited the image link to point to the correct one.

[–] doolijb 1 points 8 months ago

Ask it what other chatbot it recommends that will answer your questions