
My current and previous MacBooks have had 16GB and I've been fine with it, but given local models I think I'm going to have to go to the maximum RAM available for the next one. My current machine runs 13B models quite well with Ollama, but when I tried `mixtral-8x7b` I saw 0.25 tokens/second; I suppose I should be amazed that it ran at all.
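
For anyone curious how I got that number: roughly the sketch below, using the Ollama Python client. The eval_count and eval_duration fields are what the API reports for the generation phase (duration in nanoseconds); the model tag is just whatever `ollama list` shows locally, so treat it as a placeholder.

    # Rough sketch: measure generation speed via the ollama Python client
    # (pip install ollama; assumes the Ollama server is running and the
    # model has already been pulled).
    import ollama

    response = ollama.generate(
        model="mixtral:8x7b",  # placeholder tag; use whatever `ollama list` shows
        prompt="Explain unified memory on Apple Silicon in one paragraph.",
    )

    tokens = response["eval_count"]            # output tokens generated
    seconds = response["eval_duration"] / 1e9  # reported in nanoseconds
    print(f"{tokens / seconds:.2f} tokens/second")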

Similarly, I am for the first time going to care about how much RAM is in my next iPhone. My iPhone 13's 4GB is, as in your case, inadequate.



I recently upgraded from my M1 Air specifically because I had purchased it with 8GB -- silly me. Now I have 24GB, and if the Air line had more available I would have sprung for 32, or even 64GB. But I'm not paying for a faster processor just to get more memory :-/


I got an 8GB M1 from work, and I've been frankly astonished by what even this machine can do. Yes, it'll run the 4-bit llama3 quants - not especially fast, mind, but not unusably slow either. The problem is that you can't do much else at the same time.
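
Concretely, something like this (again the Python client; the q4_0 tag is an assumption based on what the registry offered when I tried, so check what's current):

    # Rough sketch: pull and stream a 4-bit llama3 quant with the ollama
    # Python client. The tag below is an assumption; `ollama list` or the
    # registry page has the current ones.
    import ollama

    model = "llama3:8b-instruct-q4_0"  # roughly 4-5 GB on disk, a tight fit in 8GB RAM
    ollama.pull(model)                 # no-op if it's already downloaded

    # stream=True yields chunks as they arrive, so you can watch the speed
    for chunk in ollama.chat(
        model=model,
        messages=[{"role": "user", "content": "Hello from an 8GB M1."}],
        stream=True,
    ):
        print(chunk["message"]["content"], end="", flush=True)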



