I may not have said it well, but I broadly agree with you. If a workload needs high performance but not consistently (e.g. you're running serial tests by swapping bitstreams), not predictably (e.g. you need flexibility for network conditions you can't anticipate at design time), or not at sufficient volume (e.g. costs in the low millions are prohibitive), an ASIC isn't the right solution.
But my point is that for FPGAs to come to prominence as a major computation paradigm, it probably won't be because they outperform GPUs on one really big workload like bitcoin mining or genomic analysis. It'll have to be a moderately large number of medium-scale workloads.