Hi, you’ve found this Community, welcome!
This Community is intended to be a replacement for r/LocalLLaMA, because I think we need to move beyond centralized Reddit in general (and, obviously, because of the API changes).
I will moderate this Community for now, but if you want to help, you are very welcome, just contact me!
I will mirror or rewrite posts from r/LocalLLaMA for this Community for now, but maybe we could eventually all move to this Community (or any Community on Lemmy; seriously, I don’t care about being mod or “owning” it).
scrollbars ( @scrollbars@lemmy.ml ): Hello! This is the one community that I was a bit worried about finding an equivalent of outside of Reddit. Hopefully more of us migrate over.
hendrik ( @hendrik@lemmy.ml ): Thank you for using a decent platform. I doubt more than 20 people will migrate from Reddit… but it makes the world a better place anyway.
mellery ( @mellery@lemmy.one ): Hello! Thanks for setting this up.
dirac_field ( @dirac_field@lemmy.one ): Late to the party, but thanks for setting this up! I suspect the overlap between people using local LLMs and people hungry for Reddit alternatives will be higher than average.
Barbarian ( @Barbarian@sh.itjust.works ): You should make a post in !findacommunity@lemmy.ml
pax ( @pax@sh.itjust.works ): I could help with moderation, but I have a question: how do I set up LLaMA on my Mac? Any tips?
Hi, sure, thank you so much for helping out! As for LLaMA, I’d point you to llama.cpp (https://github.com/ggerganov/llama.cpp), which is the absolute bleeding edge but also has pretty useful instructions on its page (https://github.com/ggerganov/llama.cpp#usage). You could also use Kobold.cpp, but I don’t have any experience with it, so I can’t help you if you run into issues.
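On a Mac the basic setup is only a couple of commands. A rough sketch (assumes git and the Xcode command line tools are installed; the model path is a placeholder, you need to download a GGML model file separately):

```shell
# Clone and build llama.cpp from source.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference against a GGML model you've downloaded.
# The model path is a placeholder; point it at your actual file.
./main -m ./models/your-model.ggmlv3.q5_1.bin -p "Hello, llama!" -n 128
```

If `make` fails, the error output is usually the useful part to post when asking for help.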
pax ( @pax@sh.itjust.works ): llama.cpp is crashy on my computer; it didn’t even compile.
Huh, that’s interesting. If llama.cpp doesn’t work, try https://github.com/oobabooga/text-generation-webui, which provides (or tries to provide) a more user-friendly experience.
pax ( @pax@sh.itjust.works ): It launches just fine, but when loading a model it says something like: “successfully loaded none”.
Have you put your model in the “models” folder inside the “text-generation-webui” folder? If so, navigate to the “Model” section (the button for the menu should be at the top of the page) and select your model using the box below the menu.
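For reference, the webui just scans its own “models” directory, so moving the file there is enough. A sketch (both paths are examples; adjust to wherever you downloaded the file and cloned the repo):

```shell
# Move a downloaded GGML model file into the webui's models folder.
# Both paths are examples; adjust to your actual locations.
mv ~/Downloads/Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin \
   ~/text-generation-webui/models/
```

After that, hit the refresh button next to the model dropdown in the “Model” tab so it picks up the new file.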
pax ( @pax@sh.itjust.works ): I tried to download an example one, because I don’t have any model, but it failed.
I’d recommend the model Wizard-Vicuna-7B-Uncensored (I know, the name is practically a sentence: https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML). The direct download link is here: https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML/blob/main/Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin
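If the in-browser download keeps failing, grabbing the file from the command line sometimes works better. A sketch (using Hugging Face’s “resolve” form of the “blob” link above for a direct file download; the destination folder is an example):

```shell
# Download the quantized GGML file (several GB) straight into the
# webui's models folder. Destination path is an example; adjust as needed.
wget "https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML/resolve/main/Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin" \
  -P ~/text-generation-webui/models/
```

wget resumes partial downloads with the `-c` flag, which helps on a flaky connection.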
dtlnx ( @dtlnx@beehaw.org ): Try this. It works great for me.
pax ( @pax@sh.itjust.works ): gpt4all is dumb; it didn’t even try to be smart.
dtlnx ( @dtlnx@beehaw.org ): There are multiple models to try! The default one isn’t great, I will give you that.