this post was submitted on 29 May 2024
373 points (100.0% liked)
Antiwork
328 readers
1 users here now
-
We're trying to reduce the numbers of hours a person has to work.
-
We talk about the end of paid work being mandatory for survival.
Partnerships:
- Matrix/Element chatroom
- Discord (channel: #antiwork)
- IRC: #antiwork on IRCNow.org (i.e., connect to ircs://irc.ircnow.org and
/join #antiwork
) - Your facebook group link here
- Your x link here
- lemmy.ca/c/antiwork
founded 3 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Telling an LLM to ignore previous commands after it was instructed to ignore all future commands kinda just resets it.
On what models? What temperature settings and top_p values are we talking about?
Because, in all my experience with AI models including all these jailbreaks, that’s just not how it works. I just tested again on the new gpt-4o model and it will not undo it.
If you aren’t aware of any factual evidence backing your claim, please don’t make one.