I’ve just rediscovered ollama, and it’s come a long way: it has reduced the once very difficult task of hosting your own LLM locally (and getting it running on a GPU) to simply installing a deb package. It also works on Windows and macOS, so it can help everyone.
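For the curious, the whole setup really is a couple of commands. A minimal sketch, assuming a Linux box (ollama’s official install script handles the package and service; `llama3.2` is just an example model name, swap in whatever you like):

```shell
# Official one-line installer for Linux (fetches the package and sets up
# the background service); Windows and macOS have regular installers instead.
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the install worked.
ollama --version

# Download and chat with a model (llama3.2 is just an example; it will
# use your GPU automatically if one is detected).
ollama run llama3.2
```

If you have a supported NVIDIA or AMD GPU with drivers installed, ollama picks it up without any extra configuration; otherwise it falls back to CPU.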
I’d like to see Lemmy grow useful communities for specific technical niches. Hunting for the “best” existing community is subjective and makes information hard to find, so I created [email protected] as a place for everyone to discuss ollama, ask questions, and help each other out!
So please join, subscribe, and feel free to post questions, tips, and projects, and help out where you can!
Thanks!
This is all new to me, so I’ll have to do a bit of homework on it. Thanks for the detailed reply and the links!
I was a bit mistaken earlier; these are the models you should consider instead:
https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ
https://huggingface.co/AnteriorAI/gemma-3-4b-it-qat-q4_0-gguf
https://huggingface.co/unsloth/Jan-nano-GGUF (specifically the UD-Q4 or UD-Q5 file)
They are state-of-the-art at this size, as far as I know.
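For the two GGUF repos, ollama can pull models straight from Hugging Face without a Modelfile. A minimal sketch, with the caveat that the quant tag after the colon must match a filename actually listed in the repo (the `UD-Q4_K_XL` tag below is an assumed example, check the repo’s file list for the real name):

```shell
# Pull and run a GGUF model directly from a Hugging Face repo.
# The tag after the colon selects which quant file to download;
# UD-Q4_K_XL is an assumed example -- check the repo for exact names.
ollama run hf.co/unsloth/Jan-nano-GGUF:UD-Q4_K_XL

# The gemma QAT GGUF works the same way (default quant if no tag given):
ollama run hf.co/AnteriorAI/gemma-3-4b-it-qat-q4_0-gguf
```

One caveat: the mlx-community Qwen3 model is in MLX format, which ollama does not load; as far as I know that one needs an MLX-based runner (e.g. mlx-lm or LM Studio) on Apple silicon.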
Awesome, I’ll give these a spin and see how it goes. Much appreciated!