this post was submitted on 08 Jun 2023
19 points (100.0% liked)

LocalLLaMA

20 readers
1 user here now

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Hi, you've found this ~~subreddit~~ Community, welcome!

This Community is intended to be a replacement for r/LocalLLaMA, because I think we need to move beyond centralized Reddit in general (although obviously the API situation is also a factor).

I will moderate this Community for now, but if you want to help, you are very welcome; just contact me!

I will mirror or rewrite posts from r/LocalLLaMA for this Community for now, but maybe we could eventually all move to this Community (or any Community on Lemmy; seriously, I don't care about being mod or "owning" it).

[–] pax@sh.itjust.works 0 points 1 year ago (5 children)

llama.cpp is crashy on my computer; it didn't even compile.

[–] SkySyrup@sh.itjust.works 0 points 1 year ago (4 children)

Huh, that's interesting. If llama.cpp doesn't work, try https://github.com/oobabooga/text-generation-webui, which (tries to) provide a more user-friendly experience.

[–] pax@sh.itjust.works 0 points 1 year ago (3 children)

it launches just fine, but when loading a model it says something like "successfully loaded none"

[–] SkySyrup@sh.itjust.works 0 points 1 year ago (1 children)

Have you put your model in the "models" folder inside the "text-generation-webui" folder? If you have, then navigate over to the "Model" section (the button for the menu should be at the top of the page) and select your model using the box below the menu.
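
If you don't have a model file yet, something like this should pull one into that folder (just a rough sketch using huggingface_hub; the repo id, filename, and path here are placeholders, swap in whichever GGML model you actually want):

```python
# Rough sketch: download a GGML model file straight into the webui's "models"
# folder using huggingface_hub. Repo id and filename below are placeholders.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/example-model-GGML",      # placeholder: pick a real repo
    filename="example-model.ggmlv3.q4_0.bin",   # placeholder: pick a real file
    local_dir="text-generation-webui/models",   # assumes you run this next to the webui folder
)
```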

[–] pax@sh.itjust.works 0 points 1 year ago (1 children)

I tried to download an example one, since I don't have any model, but it failed.
