Might be helpful for those who
- don’t have access to hardware that can run things locally
- understand the benefits and limitations of generative AI
Link: https://duckduckgo.com/?q=DuckDuckGo&ia=chat
As a nice coincidence, one of the first results when I searched for a news update was this discussion:
https://discuss.privacyguides.net/t/adding-a-new-category-about-ai-chatbots/17860/2
Is there a YouTube video under 10 minutes that compares the different AI models available from DuckDuckGo?
Dunno, but Llama 3 is the best open source model and Claude 3 is the best overall model they offer.
You provided no reasoning, but I choose to just believe you. Thank you, wise person on the Internet.
I can reaffirm what they said with slightly more proof.
https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
Thanks for the info
I use Mixtral 8x7B locally and it’s been great. I’m genuinely excited to see DDG offering it, and the service in general. Now I can use this service when not on my network.
What GPU are you using to run it? And what UI are you using to interface with it? (I know of GPT4All and the generic-sounding ui-text-generation program or something.)
I am using this: https://github.com/oobabooga/text-generation-webui … It runs great with my AMD 7900 XT, and it also ran great with my 5700 XT. It sets itself up within a conda virtual environment, which takes all the mess out of getting the packages to work correctly. It can use NVIDIA cards too.
Once you get it installed, you can then get your models from huggingface.co.
I’m on arch, btw. ;)
Edit: I just went and reinstalled it and saw it supports these GPUs
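For anyone wondering whether a model like Mixtral 8x7B fits on cards like those, a rough back-of-envelope VRAM estimate helps. This is only a sketch: the ~20% overhead factor for the KV cache and activations is my own assumption, not anything from the webui docs.

```python
def vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes scaled by an assumed overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# Mixtral 8x7B keeps roughly 46.7B total parameters resident (all experts loaded),
# even though only ~13B are active per token.
print(f"4-bit quant: ~{vram_gb(46.7, 4):.0f} GB")  # ~28 GB
print(f"8-bit quant: ~{vram_gb(46.7, 8):.0f} GB")  # ~56 GB
```

So even a 4-bit quant overshoots a 20 GB card on its own; in practice you’d offload some layers to the CPU, which loaders in text-generation-webui (e.g. the llama.cpp backend) support.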
That’s right, “text-generation-webui”. At least it’s unambiguous, lol. Thanks for sharing.
A lot of it might come down to individual tasks or personal preference.
Personally, I liked Claude better than GPT-3.5 for general queries, and I have yet to explore the other two.
Thank you.