16 GB of system memory vs. 16 GB of VRAM or unified memory (which I believe is how recent Apple Silicon machines work) makes a huge difference. The former is more of a neat party trick (depending on who you hang out with); the latter is something you can actually use as a tool to be more efficient.
I recently bought a 7900 XTX with 24 GB of VRAM, but the model I currently run (Llama 3 8B at 6-bit quantization) easily fits in 16 GB. It's fast enough, and high enough quality, that I can use it to process information I don't feel comfortable sharing with hosted services. It's definitely not the best of what models can do right now, but it's surprisingly useful.
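For rough intuition on why it fits: 8B parameters at 6 bits per weight is about 6 GB of weights, leaving plenty of headroom for the KV cache and runtime overhead in 16 GB. A minimal back-of-the-envelope sketch (the function name and the flat 2 GB overhead allowance are my own assumptions, not measured values):

```python
def quantized_model_vram_gb(params_billions: float, bits_per_weight: float,
                            overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: weight storage plus a flat allowance
    (assumed, not measured) for KV cache and runtime overhead."""
    weight_gb = params_billions * bits_per_weight / 8  # 1e9 params cancels 1e9 bytes/GB
    return weight_gb + overhead_gb

# Llama 3 8B at 6-bit quantization: ~6 GB of weights, ~8 GB total,
# which sits comfortably inside a 16 GB card.
print(f"{quantized_model_vram_gb(8, 6):.1f} GB")  # -> 8.0 GB
```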
Also keep in mind: 32 GB of RAM is more than enough for normal usage, but it's useless for this kind of state-of-the-art ML unless you also have a graphics card of the kind that won't fit in a laptop.
Unless of course you were talking about VRAM, in which case 16 GB is still not great for ML (to be fair, neither is the 24 GB of an RTX 4090, but there's not much more you can do in the space of consumer hardware). I don't think the other commenter was talking about VRAM, though, because 16 GB of VRAM is very much overkill for everyday computing... and pretty decent for most gaming.
It's almost a myth these days that you need top end GPUs to run models. Some smaller models (say <10B parameters with quantization) run on CPUs fine. Of course you won't have hundreds of tokens per sec, but you'll probably get around ~10 or so, which can be sufficient depending on your use case.
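The ~10 tokens/s figure falls out of memory bandwidth: single-stream decoding has to read essentially all the weights from RAM for each generated token, so throughput is bounded by (memory bandwidth) / (model size in bytes). A minimal sketch, assuming a typical ~50 GB/s dual-channel DDR4 figure and a ~4.5 GB quantized model; both numbers are illustrative, not measurements:

```python
def cpu_decode_tokens_per_sec(model_gb: float, mem_bandwidth_gbps: float) -> float:
    """Upper-bound estimate for CPU decoding: each token streams the
    full set of weights from RAM once, so throughput is roughly
    bandwidth / model size. Real numbers land somewhat below this."""
    return mem_bandwidth_gbps / model_gb

# A ~7B model quantized to 4-5 bits is roughly 4-5 GB; with ~50 GB/s
# of RAM bandwidth (assumed), decoding tops out around 10-12 tokens/s.
print(f"{cpu_decode_tokens_per_sec(4.5, 50):.0f} tok/s upper bound")  # -> 11
```

This is also why quantization helps CPU inference twice over: a smaller model both fits in less RAM and streams through the memory bus faster.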
I'll keep this in mind!