Well, it makes sense. If you're going with a dedicated graphics card in your laptop, battery life is already out of the window, so you might as well get as much processing power as the thing can handle.
As a proud owner of a laptop that could double as a self-defense weapon to cause massive blunt trauma (and a charger that falls squarely into the same category) I welcome this decision.
I am, however, considering getting a lighter notebook with longer battery life in the future. Having the power of a full desktop machine in your backpack comes in incredibly handy when you need it, but it can get a bit awkward to work with on the train.
The issue isn't so much battery life as it is heat dissipation. How does a laptop chassis handle dumping that much heat? Granted, the new 1000 series is pretty damn efficient at what it does (my 1070 is amazing for the price), but it's still a lot for a laptop.
From the article: more CUDA cores at a lower clock. Since power consumption doesn't scale linearly with clock speed, doubling the cores and halving the clock (as an example, not the actual ratio they used) leaves you with a net efficiency gain.
At factory settings the card draws 120W and pushes ~110fps in their 1440p test, but throttling the power limit down to just 60W only reduced it to ~90fps.
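Back-of-the-envelope only (a rough sketch, not anything Nvidia publishes): dynamic power goes roughly as cores × voltage² × clock, and voltage has to rise with clock, so power grows closer to the cube of frequency. Something like:

    # Idealized model only: dynamic power ~ cores * clock^3, since voltage
    # roughly tracks frequency. Illustrative, not measured silicon behavior.
    def relative_power(cores, clock):
        return cores * clock ** 3

    baseline = relative_power(cores=1.0, clock=1.0)
    wide_and_slow = relative_power(cores=2.0, clock=0.5)  # same theoretical throughput
    print(wide_and_slow / baseline)  # 0.25 -> same work for a quarter of the power

    # The throttling numbers above tell the same story in fps per watt:
    print(110 / 120)  # ~0.92 fps/W at the stock 120 W limit
    print(90 / 60)    # 1.50 fps/W at a 60 W limit

The real curve is messier than a neat cube, but the direction is the same: wide and slow beats narrow and fast on perf per watt.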
(As an aside, the AMD RX 480 comparison shows why people are disappointed with Apple supposedly using AMD Polaris GPUs in the upcoming MacBook Pro refresh.)
I just don't understand why Apple seems to prefer AMD. Bad experience with Nvidia's drivers in the old Core 2 Duo MBPs? Does AMD have a better track record?
Apple backs and is heavily invested in OpenCL. OS X itself leverages OpenCL throughout the OS (Quick Look, for example, uses it to make previews faster), and of course FCP/Motion/etc. make heavy use of OpenCL as well.
Nvidia cards are capable of OpenCL but they've never performed as well with it as they do with CUDA. AMD has always been the better option for that.
Of course Apple could implement CUDA support in their software, but they've never been big on adopting vendor-specific standards they had no part in developing.
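That portability is the whole appeal: the same kernel source runs on AMD and Nvidia hardware alike. A minimal sketch with pyopencl (assuming it's installed and an OpenCL-capable device is visible; nothing Apple-specific here):

    import numpy as np
    import pyopencl as cl

    # The same OpenCL kernel source runs on any vendor's device.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    print(np.allclose(out, a + b))  # True regardless of GPU vendor

The CUDA version of the same thing only runs on Nvidia hardware, which is exactly the kind of lock-in Apple tends to avoid.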
It doesn't matter which cards are better at OpenCL, because it is a legacy technology on Apple platforms, most likely to never be updated beyond the current version 1.2 (latest is 2.2).
Power consumption, optimization for non-DirectX drivers, and the fact that AMD could meet the parts demand from Apple.
Apple ha(d) really specific requirements for their machines, so I don't doubt the decision came down to some spec being met by AMD and not Nvidia at the time. Nvidia seems pretty happy cornering the high-end market and can barely keep the 1000 series in stock at the moment.
Alternative options: Nvidia is too expensive per unit, or most Apple customers couldn't care less about dedicated graphics. I think the latter is most likely - they don't sell their products on specs.
Still, I've worked with a lot of Apple and Dell laptops and they ALL have some kind of overheating issue with their GPUs. Whether this design solves all of those problems, who knows. But I'm skeptical.
There's a slightly lower clock speed, but it's really going to be up to the OEMs to make this all work from a cooling perspective. I suspect your average gaming laptop is already so big and heavy that another big fan in there isn't really going to bother anyone.
It's probably going to be a kludgier solution, but the M series GPUs are fairly terrible, often a fraction of the performance of their desktop equivalents. I think Nvidia saw the external GPU thing on the horizon and decided this is the better approach. I tend to agree. I'd rather have a little extra weight and girth in a laptop than worry about a whole external enclosure and not having it when I'm out, or dealing with all the wires and such.
From a battery perspective, who cares. Even the M series had to be plugged into the wall for any non-trivial GPU work.
They've had external GPU solutions here and there in the past; not quite sure why they didn't catch on more. You get the mobility of a laptop, and then when you're docked at home you get the GPU processing power in a separate form factor.
External GPUs didn't catch on earlier because Thunderbolt is the only viable standardized interface for them, and Intel refused to certify eGPU products based on TB1 or TB2.
TB3 is the tipping point where Intel decided there's enough bandwidth to make it a smooth experience, and put their weight behind the idea.
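To put rough numbers on it (raw link rates, ignoring encoding and protocol overhead, so treat them as approximate):

    # Approximate raw link rates in Gb/s.
    links_gbps = {
        "Thunderbolt 1": 10,
        "Thunderbolt 2": 20,
        "Thunderbolt 3": 40,    # tunnels roughly a PCIe 3.0 x4 link to the GPU
        "PCIe 3.0 x16": 126,    # what a desktop slot gives the same card
    }
    for link, rate in links_gbps.items():
        print(f"{link:>14}: {rate:3d} Gb/s (~{rate / 8:.1f} GB/s)")

Even TB3 is a fraction of what a desktop slot provides, but it's the first generation where the gap is small enough for Intel to call the experience smooth.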
I recently switched from an 18" luggable gaming laptop to a Surface Book. It's powerful enough to replace it (I got the version with an Nvidia GPU in the base that it switches to dynamically), but small and light, with good battery life too.