Hacker News | amirhirsch's comments

I did write this 20 years ago https://fpgacomputing.blogspot.com/2006/05/methods-for-recon...

The vendor tools are still a barrier to using the hardened IP on high-end FPGAs.


Python won because of enforced whitespace. It solved a social problem that other languages punted to linters by baking readability into the spec.
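A minimal illustration of "baked into the spec": Python rejects inconsistent indentation at compile time, rather than leaving it as a style warning for a linter.

```python
# Python makes indentation part of the grammar: misaligned code is a
# syntax error (IndentationError), not a linter complaint.
src = (
    "def f():\n"
    "    x = 1\n"
    "      y = 2\n"  # over-indented relative to the previous line
)

try:
    compile(src, "<example>", "exec")
    rejected = False
    reason = ""
except IndentationError as err:
    rejected = True
    reason = err.msg

print(rejected, reason)  # True unexpected indent
```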


The effect of these tools is people losing their software jobs (down 35% since 2020). Unemployed devs aren’t clamoring to go use AI on OSS.


Wasn't most of that caused by the 2022 change to how R&D expenses are amortized, which made R&D spending (like retaining dev staff) less financially attractive?

Context: This news story https://news.ycombinator.com/item?id=44180533


Yes! Even though it's only a tax rule for the USA, it somehow applied to the whole world! That's how mighty the US is!

Or could it be that, after the growth and build-out, we're in maintenance mode and need fewer people?

Just food for thought


Yes, because US big tech has regional offices in loads of other countries too and fired loads of those developers at the same time, so the US job-market collapse affected everyone.

And since then there's been a constant doom and gloom narrative even before AI started.


Probably also the end of ZIRP, plus some "AI washing" to give the illusion of progress.


Same thing happened to farmers during the industrial revolution, to horse-drawn carriage drivers, to accountants (and mathematicians) when Excel came along, and on and on the list goes. Just part of human progress.


I keep asking ChatGPT when LLMs will reach 95% software-creation automation; the answer is ten years.


I don't think it'll take that long, but yeah, I give it five years.

Two years, and 3/4 of us will no longer be needed.


I don't know, I go back and forth a bit. The thing that makes me skeptical is this: where is the training data that contains the experiences and thought processes that senior developers, architects, and engineering managers go through to gain the insight they hold?


I don't have all the variables (OpenAI's financials, debt, etc.), but a few articles mention that the labs delegate part of their work to {claude,gemini,chatgpt} code agents internally with good results. It's a first step in a singularity-like ramp-up.

People think they'll have jobs maintaining AI output, but I don't see how maintaining is that much harder than creating for an LLM able to digest requirements and a codebase and iterate until working source runs.


I don't think it'll take that long either; people forget that the agents themselves are also developing.

Back then, we put all the source code into the AI's context to create things; then we manually put files into context; now it looks for the needed files on its own. I think we can do even better by having the AI generate per-file API documentation and only read a file when it's really needed, selecting just the API docs relevant to the task. And I bet there's more possible, including skills and MCP on top.

So, not only are LLMs getting better, but so is the software using them.


Cool!

Constraint propagation from SICP is a great reference here:

https://sicp.sourceacademy.org/chapters/3.3.5.html
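A toy sketch of the idea, loosely following the SICP chapter linked above (names like `Connector` and `Adder` are just this sketch's choices): values flow through connectors, and a constraint fills in whichever value is missing, in either direction.

```python
class Connector:
    """Holds a value and notifies attached constraints when it is set."""
    def __init__(self):
        self.value = None
        self.constraints = []

    def set(self, value):
        if self.value is None:
            self.value = value
            for c in self.constraints:
                c.propagate()


class Adder:
    """Enforces a + b = total, deducing whichever operand is unknown."""
    def __init__(self, a, b, total):
        self.a, self.b, self.total = a, b, total
        for conn in (a, b, total):
            conn.constraints.append(self)

    def propagate(self):
        a, b, t = self.a.value, self.b.value, self.total.value
        if a is not None and b is not None and t is None:
            self.total.set(a + b)
        elif a is not None and t is not None and b is None:
            self.b.set(t - a)
        elif b is not None and t is not None and a is None:
            self.a.set(t - b)


# Bidirectional: setting any two of a, b, total fixes the third.
a, b, total = Connector(), Connector(), Connector()
Adder(a, b, total)
a.set(3)
total.set(10)
print(b.value)  # 7
```

The same connectors can feed multiple constraints, which is what makes the SICP temperature-converter example work in both directions.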


I wasn't aware of this chapter, but I did use constraint propagation for the solver (among other things), thanks!


> # Tell the driver to completely ignore the NVLINK and it should allow the GPUs to initialise independently over PCIe !!!! This took a week of work to find, thanks Reddit!

I needed this info, thanks for putting it up. Can this really be an issue for every data center?
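For anyone searching later: the knob usually cited for this is an NVIDIA kernel-module parameter. A sketch, assuming the `NVreg_NvLinkDisable` option (verify the name against your driver version's documented module parameters before relying on it):

```shell
# /etc/modprobe.d/nvidia-nvlink.conf
# Assumed parameter: tell the nvidia driver to ignore NVLink so each GPU
# initialises independently over PCIe. Takes effect after a driver
# reload (or reboot) and an initramfs rebuild on some distros.
options nvidia NVreg_NvLinkDisable=1
```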


Doesn’t this prevent the GPUs from talking to each other over the high speed link?


I'll find out soon, but without this hack, the GPUs are non-functional.


I implemented a PDP-11 in 2007–10, and I can still read PDP-11 octal.


I am so into optimizing fast polynomial multiplication, I assure you nothing will discourage me from creating a slightly more optimized version.


The phenomenon, as it appears in movies, has a name: the "Idiot Plot" (https://en.wikipedia.org/wiki/Idiot_plot), an older term that Roger Ebert popularized. It feels like a missing reference in the blog post.


But people still love movies where everyone is an idiot, like Jurassic Park or Interstellar. I wonder if that then translates into their real-life decisions.


I haven't seen Interstellar, but to be fair to Jurassic Park, there's literally a character who tells everyone else that the park is a terrible idea, even if his "scientific" basis for it isn't very coherent. (He might still be an idiot in other ways; I haven't seen it in a while, but I think it's an overstatement to say it's about everyone being an idiot rather than some specific people with enough money to find enough other idiots to execute their vision).


Reminds me of the OceanGate disaster as proof that things at least that dumb can happen in real life, with substantial amounts of money involved, including everyone who was actually an expert in the field telling them it was idiotic and usually quitting.


I think the main question is whether or not the characters are believable idiots. I loved the first Jurassic Park because, while yes, people are idiots, they are idiots for the same reasons people in real life are idiots. It's well-motivated idiocy.

I loathe the latest Jurassic Park. There's no way a band of experienced mercenaries is that incompetent.


What if the goal of writing about how “AI is bad for the environment” (because of the energy and water it uses) is to identify gullible people and on-ramp them into a lifetime of media manipulation?


OTOH, what if the goal of downplaying the environmental risks is to make people gullible, get them to stop caring, and have them spend more of their money now while ignoring the consequences, as industrialization has been doing for a couple of centuries?


This was done on the reCAPTCHA demo page: no invisible fingerprinting, behavioral testing, or user classification.


Ah yeah, probably not a good test then. Good point.

