
It doesn't seem like Nvidia even has any 3nm GPUs on the market. But sure. When you control for power efficiency, it turns out there's no difference at all!


Process node is not equivalent to power efficiency. It's a step change that enables better designs.

Apple and Nvidia both have 5nm and 4nm GPUs. Take those scores and divide them by the TDP, and you'll be shocked at the difference design can make.


Please never divide anything by TDP. Use actual power measurements, unless you're trying to ensure your numbers end up being bullshit. (In particular, any number someone claims is a TDP for an Apple processor is made up, because Apple doesn't publish or specify any quantity remotely similar to TDP.)


Okay, then don't divide by TDP. Measure the GPU wattage frame-by-frame and you'll still end up with similar numbers. The point stands.

> because Apple doesn't publish or specify any quantity remotely similar to TDP

1) That doesn't mean that power usage isn't measurable.

2) They actually do, although it's not a perfect chip-by-chip breakdown: https://support.apple.com/en-us/102027
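
To be concrete about what "measure, then divide" looks like instead of dividing by TDP, here's a rough sketch. It assumes you've logged per-sample GPU power during the benchmark run (e.g. from `powermetrics` on Apple Silicon or `nvidia-smi --query-gpu=power.draw` on Nvidia hardware); the CSV file name and column name are made up for illustration:

    # Performance-per-watt from measured power samples rather than TDP.
    # Assumes a CSV of per-sample GPU power readings captured during the run;
    # the file name and "gpu_power_watts" column are hypothetical.
    import csv

    def average_gpu_power(path: str) -> float:
        """Mean of the sampled GPU power draw, in watts."""
        with open(path, newline="") as f:
            samples = [float(row["gpu_power_watts"]) for row in csv.DictReader(f)]
        return sum(samples) / len(samples)

    def perf_per_watt(benchmark_score: float, power_log: str) -> float:
        """Benchmark score divided by measured average power, not by a TDP figure."""
        return benchmark_score / average_gpu_power(power_log)

    # e.g. perf_per_watt(score_from_your_benchmark, "gpu_power_log.csv")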


Are you seriously trying to claim that Apple's total system wall power numbers are appropriate for comparing against an AMD or Intel processor TDP number? You really are trying to ensure the numbers you calculate are bullshit.


I think you did not read the context of this discussion. We're talking about GPU power draw, not whole-SoC power; GPU power can be measured on Apple Silicon and compared against third-party GPUs running the same raster workloads.

If you think any of my calculations are wrong, please cite them and correct them. GPU-to-GPU, Apple's raster performance is lacking.
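
For reference, this is roughly how a GPU-only power number can be pulled on Apple Silicon: a sketch that shells out to `powermetrics` and assumes its output contains a "GPU Power: N mW" line (the exact format varies across macOS versions, so treat the parsing as an assumption):

    # Sketch: sample GPU-only power on Apple Silicon via powermetrics (needs sudo).
    # The "GPU Power: N mW" line format is an assumption and may differ by macOS version.
    import re
    import subprocess

    def sample_apple_gpu_power(samples: int = 10, interval_ms: int = 1000) -> float:
        """Average GPU power in watts over `samples` readings."""
        out = subprocess.run(
            ["sudo", "powermetrics", "--samplers", "gpu_power",
             "-i", str(interval_ms), "-n", str(samples)],
            capture_output=True, text=True, check=True,
        ).stdout
        mw = [float(m) for m in re.findall(r"GPU Power:\s*([\d.]+)\s*mW", out)]
        return sum(mw) / len(mw) / 1000.0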



