Reminder that a specialized AI chip designed solely to accelerate the transformer architecture would only require decades-old, widely diffused process tech that's far simpler to manufacture. Basically as many multiply-accumulate (MAC) units for matrix multiplication as you can squeeze onto a chip.
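To see why the workload reduces to matrix multiplies, here's a minimal NumPy sketch of single-head self-attention (all names and shapes are illustrative, not any particular model's): the whole forward pass is five matmuls plus one cheap softmax, which is exactly what a matmul-only chip would target.

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    """Single-head self-attention: compute is dominated by matmuls."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv                # three projection matmuls
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])       # fourth matmul, scaled
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                   # softmax (cheap, elementwise)
    return w @ V                                    # fifth matmul

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                    # 8 tokens, 64-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((64, 64)) for _ in range(3))
out = attention(x, Wq, Wk, Wv)
print(out.shape)
```

The feed-forward blocks that make up the rest of a transformer layer are likewise just two more matmuls with a nonlinearity in between, so the hardware story is the same throughout.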
NVDA may yet see some competition, as we drop our complexity standards and embrace a one-size-fits-all general MLLM architecture.