dtlnx

joined 2 years ago
[–] dtlnx 5 points 2 years ago

Yes, very nice addition. All around, the app feels much better to use.

[–] dtlnx 4 points 2 years ago (1 children)

Post the video!

[–] dtlnx 2 points 2 years ago

I'd say mainly privacy concerns. Everything you type is sent to Grammarly's servers, and I'm not sure what's done with that data.

[–] dtlnx 3 points 2 years ago* (last edited 2 years ago)

Wow thank you. I had no idea this was a thing!

[–] dtlnx 1 points 2 years ago* (last edited 2 years ago)

Decentraleyes serves popular resources locally, reducing reliance on external content delivery networks and improving privacy. However, LocalCDN supports a more extensive list of libraries, adds extra privacy features, and is actively maintained, which makes it the better choice.

[–] dtlnx 1 points 2 years ago

uBlock Origin has additional filter lists that block these. Check out the settings.

[–] dtlnx 1 points 2 years ago

Super excited about this. Thanks for sharing!

 

I found and bookmarked this resource a while back. Lots of tools to try out!

42
submitted 2 years ago by dtlnx to c/support
 

I just wanted to mention that I've noticed seriously improved performance this evening on Beehaw. Whatever you did seems to have done the trick for now!

Thanks for running this instance and putting in all the effort!

[–] dtlnx 1 points 2 years ago

This was the example I immediately thought of when I saw this post. Blew me away when I first saw it.

[–] dtlnx 9 points 2 years ago (3 children)

Wonder what Steam Deck support will look like.

[–] dtlnx 3 points 2 years ago (1 children)

You can bring as many people as can fit in the seats.

[–] dtlnx 3 points 2 years ago (1 children)

You could try something like this.

https://github.com/xNul/chat-llama-discord-bot

Looks like it works on Linux/Windows/macOS.
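
If you'd rather wire up something minimal yourself, here's a rough sketch of the same idea (a Discord bot that relays messages to a locally running model) using discord.py and the gpt4all bindings. It's not the linked repo's code, and the model filename is just a placeholder:

```python
# Rough sketch of a Discord bot that relays messages to a local LLM.
# Assumes `pip install discord.py gpt4all`; the model filename is a placeholder.
import discord
from gpt4all import GPT4All

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

# Any GGUF model downloaded through GPT4All works here (placeholder name).
model = GPT4All("wizardlm-30b.Q4_0.gguf")

@client.event
async def on_message(message):
    # Ignore the bot's own messages and anything without the !chat prefix.
    if message.author == client.user or not message.content.startswith("!chat "):
        return
    prompt = message.content[len("!chat "):]
    # Blocking call; fine for a quick local test.
    reply = model.generate(prompt, max_tokens=250)
    await message.channel.send(reply[:2000])  # Discord caps messages at 2000 chars

client.run("YOUR_DISCORD_BOT_TOKEN")
```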

[–] dtlnx 2 points 2 years ago (1 children)

I'd have to say I'm very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it's slow, the results are quite impressive.

Looking forward to Orca 13B if it's ever released!
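
If anyone wants to script this instead of using the GUI, the gpt4all Python bindings make it pretty easy. A minimal sketch; the model filename below is just a placeholder for whatever GGUF file you've downloaded:

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# Point the filename at whatever model file you actually downloaded.
from gpt4all import GPT4All

model = GPT4All("wizardlm-30b.Q4_0.gguf")

# chat_session() keeps conversation history between prompts.
with model.chat_session():
    print(model.generate("Explain what a local LLM is in one sentence.", max_tokens=100))
    print(model.generate("Now say it like a pirate.", max_tokens=100))
```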

 

Let's talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything interesting you found while working with them.

 

I figured I'd post this. It's a great way to get an LLM set up on your computer, and it's extremely easy for folks who don't have much technical knowledge!
