this post was submitted on 12 Jun 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

Hey all, which LLMs are good for roleplay? Is base LLaMA good? I've read Pygmalion is fine-tuned for it, but I haven't tried it yet.

Ideally I'm hoping for a model that can stay in character.

top 3 comments
[–] swandi@sh.itjust.works 4 points 1 year ago* (last edited 1 year ago)

If you can run 13B, I would recommend the following models (a short loading example follows the two write-ups). The links are for the GGML models. Keep in mind that if you can run a 30B or larger model, there are other LLMs that will work better.

TheBloke/manticore-13b-chat-pyg-GGML

This one is my personal current favorite. I think that it's better than Chronos for short messages. For me it usually sticks to using asterisks for actions, and quotes for speech, which is what I prefer.

TheBloke/chronos-wizardlm-uc-scot-st-13B-GGML

This one feels more "clever" to me all around, and is currently very popular for roleplay. It produces longer results, so I feel it's better suited for longer dialogue. It also seems to understand scenarios better, and I usually get slightly more creative results from it.
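
A minimal sketch of running one of these GGML files locally with llama-cpp-python. The filename, quantization level (q4_0), and prompt format below are assumptions; check the model card on Hugging Face for the template the model was actually trained on.

```python
from llama_cpp import Llama

# Hypothetical path to a 4-bit GGML file downloaded from
# TheBloke/manticore-13b-chat-pyg-GGML on Hugging Face.
llm = Llama(
    model_path="./manticore-13b-chat-pyg.ggmlv3.q4_0.bin",
    n_ctx=2048,  # context window; 2048 is the Llama-1 default
)

# Asterisks for actions, quotes for speech, matching the style above.
prompt = (
    "You are Aria, a sarcastic tavern keeper. Stay in character.\n"
    'USER: *walks in and waves* "Evening!"\n'
    "ASSISTANT:"
)

out = llm(prompt, max_tokens=200, temperature=0.8, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```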

[–] Pliny@lemmy.fmhy.ml 2 points 1 year ago

If you’re looking for chatbots -

SillyTavern with GPT-3.5 Turbo is pretty good, but you have to pay for the OpenAI API.

My experience with Pygmalion wasn't great, but some people really like it.

Character AI is unquestionably the best model and the easiest to use; however, it has a strict NSFW policy.

I've never tried LLaMA, but people raved about it when it was first released. I don't know how easy it is to get up and running, though.

[–] ThrowsArrows@sopuli.xyz 1 point 1 year ago

It also really depends on VRAM, IMO. I have a 4090, and these days I don't tend to touch anything under 30B (Wizard Uncensored is really good here). If I had dual 3090s, I would likely be running a 65B model.
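
For a rough sense of why those sizes map to those cards, here is a back-of-the-envelope sketch (weights only; the KV cache and runtime overhead add a few more GB, so treat these as lower bounds):

```python
# Approximate VRAM needed just for the quantized weights.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for size in (13, 30, 65):
    print(f"{size}B @ 4-bit: ~{weight_gb(size, 4):.1f} GB")
# 13B -> ~6.5 GB, 30B -> ~15 GB (fits a 24 GB 4090),
# 65B -> ~32.5 GB (hence dual 3090s = 48 GB total).
```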
