I imagine an FPGA could just be part of a general CPU, with user-space APIs to program it to accelerate certain workflows - in other words, this sounds exactly like JIT to me. People could program the FPGA as they need to, e.g. an AV1 encoder/decoder, accelerating some NN layers, or even a JS runtime. Am I thinking of something too wild for the hardware's capability, or is it just that the ecosystem isn't there yet to allow such flexible use cases?
Digital logic design isn't software programming, and today's FPGAs are for most intents and purposes 'single-configuration-at-a-time' devices - you can't realistically time-slice them.
The placement and routing flow for these devices is an NP-complete problem and is, in practice, unpredictable: the exact same HDL will typically produce identical results, but even slightly different HDL can produce radically different results.
All of the use cases you've mentioned (AV1 decoders, NN layers, and especially a JS runtime) require phenomenal amounts of physical die area, even on modern processes. For all but the most niche problems, a CPU will run circles around anything you could fit in the practical die area you can afford to spare - at massively higher clock speeds.
My rule of thumb is a 40x silicon area ratio between FPGA and ASIC, a clock speed around 5x lower, and a lot more power consumption.
If you have an application that can be done on a CPU, with lots of sequential dependencies (such as video compression/decompression), an FPGA doesn't stand a chance compared to adding dedicated silicon area.
That's even more so if you embed an FPGA on a CPU die. Intel tried it, and the result was a power-hungry jack of all trades, master of none, that nobody knew what to do with.
Xilinx MPSOC and RFSOC are successful, but their CPUs are comparatively low performance and are used as application-specific orchestrators, never as generic CPUs that run traditional desktop or server software.
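To put the rule of thumb above into rough numbers: the 40x area and 5x clock figures come from that rule, and everything else below is illustrative arithmetic, not a measurement of any real device.

    # Rough arithmetic for the 40x-area / 5x-clock rule of thumb above.
    # Purely illustrative; no real device is being measured here.
    AREA_RATIO = 40    # FPGA needs ~40x the silicon area of dedicated logic
    CLOCK_RATIO = 5    # and clocks roughly 5x lower

    # Throughput per unit of die area, relative to dedicated silicon:
    relative_throughput_per_area = 1 / (AREA_RATIO * CLOCK_RATIO)
    print(f"FPGA throughput per unit area vs ASIC: ~{relative_throughput_per_area:.1%}")
    # ~0.5%, i.e. a ~200x gap before power consumption even enters the picture

Which is roughly why, for anything that dedicated silicon can already do well, spending the spare die area on the dedicated block tends to win.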
I have yet to see the "FPGAs are less power efficient" claim hold up. People are always comparing the same circuit in an FPGA vs. an ASIC, but this is a nonsensical comparison, for three reasons:
1. FPGAs have hard-wired blocks like DSPs, which have no power disadvantage vs. an "ASIC" (only advantages, actually);
2. the likelihood that an ASIC exists that happens to implement your particular design is very low;
3. off-the-shelf ASICs like GPUs and CPUs carry significant overhead for each operation. This is especially evident with CPUs: they perform a small number of operations per cycle, yet pay the entire fixed energy cost of caches, registers, instruction decoding, etc. every cycle. That is far worse than programmable logic if you're mostly using the DSP and block RAM slices (rough numbers sketched below).
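A crude sketch of reason 3, the fixed-overhead argument. Every number here is a made-up placeholder chosen to show the shape of the comparison, not a measurement of any real CPU or FPGA.

    # Toy comparison of energy per useful operation.
    # All figures are invented placeholders illustrating the fixed-overhead argument.
    cpu_fixed_energy_per_cycle_pj = 500   # assumed: caches, register file, decode, etc.
    cpu_useful_ops_per_cycle = 4          # assumed: operations actually retired per cycle

    fpga_dsp_energy_per_op_pj = 5         # assumed: one hard DSP slice operation
    fpga_fabric_overhead = 3.0            # assumed: programmable routing + block RAM access

    cpu_pj_per_op = cpu_fixed_energy_per_cycle_pj / cpu_useful_ops_per_cycle
    fpga_pj_per_op = fpga_dsp_energy_per_op_pj * fpga_fabric_overhead

    print(f"CPU:  ~{cpu_pj_per_op:.0f} pJ per useful op")   # fixed cost amortised over few ops
    print(f"FPGA: ~{fpga_pj_per_op:.0f} pJ per useful op")  # hard DSP cost plus fabric overhead

The point is the structure of the comparison, not the specific numbers: the CPU's fixed per-cycle machinery is amortised over only a handful of operations, while a design that mostly exercises hard DSP and block RAM slices pays far less overhead per operation.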
Sure, there is hype around MCP, but how does that get you to "MCP Is Mostly Bullshit"?
The title is more or less clickbait; even the author admits it:
> I am not even saying MCP is bad tech or useless. It’s just one way among others to provide context to AI assistants/agents. If you have ever built an LLM-based application, you have more or less done something similar.
But beyond "something similar", it's very good to see a standard protocol that everyone adopts; MCP will make it much easier for existing tools & services to be immediately ready as better models are released in the future.
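For context on the "you have more or less done something similar" point, a hand-rolled version of giving an assistant context via tools tends to look like the sketch below. The function names and message shapes here are hypothetical illustrations, not the actual MCP wire format.

    # Minimal hand-rolled "expose tools to the model" loop - hypothetical shapes,
    # not the MCP protocol itself.
    from typing import Any, Callable

    TOOLS: dict[str, Callable[..., Any]] = {
        "search_docs": lambda query: f"(results for {query!r})",
        "read_file": lambda path: open(path, encoding="utf-8").read(),
    }

    def handle_tool_call(call: dict) -> dict:
        """Run a tool the model asked for and wrap the result for the conversation."""
        result = TOOLS[call["name"]](**call["arguments"])
        return {"role": "tool", "name": call["name"], "content": str(result)}

    # In a real application this request would come from the LLM's response.
    fake_model_request = {"name": "search_docs", "arguments": {"query": "MCP"}}
    print(handle_tool_call(fake_model_request))

A standard protocol just means every assistant and every tool agrees on those shapes once, instead of each application reinventing them.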
It is really amusing and ironic to watch all this happening. The U.S. government tried very hard to block China from accessing GPU resources to stop its AI development, but actually helped China take a leap in developing efficient, more cost-effective LLM models under constrained GPU access.
And then "China" (which is actually a bunch of super generous folks at DeepSeek) decides to release it all back to the US under a permissive MIT license.
They could've just exposed an API and kept the model to themselves but they didn't!
They could've not published their research paper, but they did, again and again - and each time they publish they discuss not just the techniques that DO work, but those that don't - saving researchers everywhere from loads of dead ends.
That is pure awesome. Thank you DeepSeek engineers for your gift to humanity.
Do they have models that try to downplay what happened in Tiananmen Square? That would be a sneaky way to shape our future in some way (and no whataboutism, we do it too).
No human is in danger of forgetting Tiananmen Square unless they didn't know about it in the first place. Details are strewn across the Internet and in libraries all over the world. New generations of students and interested kids can easily learn about them.
Additionally it has been shown that making models forget things lobotomizes them, so no SOTA model can ever do that and be SOTA. They might be post-trained into pretending not to know, but the technology fundamentally cannot resist jailbreaking.
Do you have examples of knowledge that has actually become at risk as a result of this one AI model being added to the pile??
I doubt your GPU sanctions have had much of an influence one way or the other. They can get their resources from third countries even if they can't get them directly from the USA. I wonder if the USA will eventually try to lock down higher-end NVIDIA GPUs and prevent export altogether.
From a company's perspective, I think the answer is yes.
But from an individual's perspective, I don't think that is the case. Since AlphaGo was first released and beat world-class players, have all those players gone? Not really - it has even encouraged more people to study Go with AI instead.
As a software engineer myself, do I enjoy using AI for coding? Yes, for a lot of trivial and repetitive work. But do I want it to take my coding work away entirely, so that I can just type natural language? My answer is no. I still need the dopamine hit from coding myself, whether for work or for hobby time, even if I am rebuilding some wheels other folks have already built - and I believe many of us are the same.
The guy who got beaten literally decided to retire immediately afterwards, explicitly because AI displaced him:
> On 19 November 2019, Lee announced his retirement from professional play, stating that he could never be the top overall player of Go due to the increasing dominance of AI.
I do get your point, though, that the overall player count is still fine.
I can imagine how depressed Lee felt when he was beaten by AI the first time, but looking at the bright side, we see Shin Jin-seo as the rising star of the AI era of Go, leveraging AI to help with his training.
If only the U.S. had opened its market to Chinese EVs (yes, yes, I know the Chinese government subsidizes those vendors a lot, but the U.S. could still impose punitive tariffs instead of an outright ban[1], and of course it makes excuses out of national security - meanwhile Tesla is doing okay in China). Having no competition is the reason these U.S. vendors are so far behind.
There are a few Chinese EVs available widely in Australia (BYD, MG, and recently Zeekr) that all conform to Australian Design Rules (ADR), which have been "harmonised" with the European ones for a while. So, most likely this would get the vehicles pretty close to passing the US safety rules, though they'd still have to go through testing and certification.
> I did not "claim" or "state" from myself directly, I said "recognized as malware" (as per automated analysis from sites), let's learn properly understand the things we read, before "hater expert readers" jump into conclusions. Because the questions I was asking is not "is this malware or not", the questions I did ask across multiple occasions and we should all ask, and some intelligence individuals did just that
Hey, Hacker News readers aren't stupid; we all know what effect (clickbait) it has when you put "Chinese" and "Malware" together in a title.