
The RTX 5090 has 32GB of VRAM. Not sure if that’s enough to fit this model.


llama.cpp supports offloading some experts in a MoE model to the CPU. The results are very good, and even weaker GPUs can run larger models at reasonable speeds.

--n-cpu-moe in https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...
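Roughly what that looks like with llama-server (a sketch; the model path, layer count, and context size are placeholders you'd tune until the GPU-resident part fits in VRAM):

  # keep the MoE expert weights of the first 24 layers on the CPU;
  # attention and everything else still runs on the GPU
  llama-server \
    -m ./model-q4_k_m.gguf \
    --n-gpu-layers 999 \
    --n-cpu-moe 24 \
    --ctx-size 8192 \
    --port 8080

This works well for MoE models because only a few experts are active per token, so the CPU-side weights are touched far less often than the attention layers kept on the GPU.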


It should fit enough of the layers to make it reasonably performant.




