Hacker News
p1esk | 80 days ago | on: Tongyi DeepResearch – open-source 30B MoE Model th...
A 5090 has 32GB of VRAM. Not sure if that's enough to fit this model.
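A quick back-of-the-envelope check, assuming typical llama.cpp quantization sizes (the bytes-per-weight figures are rough averages, and KV-cache/activation overhead is ignored, so real usage runs higher):

```python
# Rough VRAM estimate for a 30B-parameter model at common quantizations.
# Bytes-per-weight values are approximate; KV cache and runtime overhead
# are NOT counted, so actual memory use will be somewhat larger.
PARAMS = 30e9  # 30B parameters

BYTES_PER_WEIGHT = {
    "fp16":   2.00,   # 16 bits/weight
    "q8_0":   1.06,   # ~8.5 bits/weight
    "q4_k_m": 0.57,   # ~4.5 bits/weight
}

def weights_gib(quant: str, params: float = PARAMS) -> float:
    """Approximate size of the weights alone, in GiB."""
    return params * BYTES_PER_WEIGHT[quant] / 2**30

for q in BYTES_PER_WEIGHT:
    size = weights_gib(q)
    verdict = "fits" if size < 32 else "does not fit"
    print(f"{q:8s} ~{size:5.1f} GiB -> {verdict} in 32 GiB")
```

By this estimate, fp16 (~56 GiB) clearly doesn't fit, a 4-bit quant (~16 GiB) clearly does, and 8-bit (~30 GiB) leaves almost no room for KV cache.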
IceWreck | 80 days ago
LlamaCPP supports offloading some experts in a MoE model to the CPU. The results are very good, and even weaker GPUs can run larger models at reasonable speeds. See n-cpu-moe in https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...
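A sketch of what that invocation looks like (the model filename and layer count are placeholders; flag names follow the llama.cpp server tool the comment links to):

```shell
# Offload all layers to the GPU, but keep the MoE expert tensors of the
# first 20 layers on the CPU. Attention weights and the remaining experts
# stay in VRAM, which is where most of the per-token compute happens.
llama-server \
    -m tongyi-deepresearch-30b-q4_k_m.gguf \
    --n-gpu-layers 99 \
    --n-cpu-moe 20
```

Raising `--n-cpu-moe` trades VRAM for speed: more experts live in system RAM, so less VRAM is needed but each token pays for CPU-side expert computation.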
svnt | 80 days ago
It should fit enough of the layers to make it reasonably performant.