I don't know why people build houses with nail guns, I like my hammer... What's the point of building a house if you're not going to pound the nails in yourself?
AI tooling is great at getting all the boilerplate and bootstrapping out of the way... One still has to have a thoughtful design for the solution, to leave gaps where you see things evolving rather than writing something so concrete that you're scrapping it to add new features.
You can pick apart a nail gun and see exactly how it works pretty easily. You can't do that with LLMs. A nail gun also doesn't get less accurate the more nails you shoot one after another; an LLM does get less accurate the more steps it goes through. A nail gun shoots straight, not in random directions, because that would be considered dangerous; an LLM does shoot in random directions. The same prompt will often yield different results. With a nail gun you can easily pull the plug, and you won't have to spend an unreasonable amount of time verifying that the nail got placed correctly; with LLM output you have to verify everything, which takes a lot of time. If an LLM really is such a great tool for you, I fear you are not verifying everything it does.
If the boilerplate is that obvious, why not just have a blueprint for it and copy-paste it over using a parrot?
Also, I don't have a nail gun subscription, and the nail gun vendor doesn't get to see what I am doing with it.
You mention a thousand ways the analogy breaks when you take it too far, but you didn't address the actual (correct) point the analogy was making: Some people don't enjoy certain parts of the creative process, and let an LLM handle them. That's all.
> Some people don't enjoy certain parts of the creative process,
Sure
> and let an LLM handle them.
This is probably the disputed part. It is not just another way of developing software, and it should not be presented as such. In software, we can use ready-made components, choose between different strategies, build everything in a low-level language, etc. The trade-offs that come with each choice are in principle knowable; the developer is still in control.
LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development. On the surface, it might look like you get ready-made components by outsourcing to them, but there is no contract guaranteeing any standard, so you have to check everything.
Now if people presented it as "I'd rather manage an outsourcing process than do the creative thing," we would have no discussion. But hammers and nails aren't the right analogies.
> LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development.
You're going to have to tell us your definition of 'using an LLM', because it is not akin to outsourcing (as I use it).
When I use Claude, I tell it the architecture, the libraries, the data flows, everything. It just puts the code down, which is the boring part, and that happens fast.
The time is mostly spent on testing and finding edge cases, exactly as it would be if I wrote it all myself.
> 'using an LLM', because it is not akin to outsourcing (as I use it).
The things you do with an LLM are precisely what many IT firms do when outsourcing to India. Now you might say that would be bonkers, but that is also why you hear so often that LLMs are the biggest threat to outsourcing rather than to software development in general. The feedback cycle with an LLM is much faster.
> I don't see how this is hard for people to grasp?
I think I understand you, and I think you have/had something else in mind when hearing the term outsourcing.
I don't think people use an LLM and say "I wrote some code", but they do say "I made a thing", which is true. Even if I use an LLM to make a library, and I decide the interfaces, abstractions, and algorithms, it was still me who did all that.
> Using an LLM is more akin to managing outsourced software development.
This is a straw man argument. You have described one potential way to use an LLM and presented it as the only possible way. Even people who use LLMs will agree with you that your weak argument is easy to cut down.
You can't stretch it until it breaks and then say "see? It broke, it wasn't perfect." It works for the purpose it was made for, and that's all it needed to do.
This appears to misunderstand both construction and software development: nail guns and LLMs are not remotely parallel.
You’re comparing a deterministic method of quickly installing a fastener with something that nondeterministically designs and builds the whole building.
Nail guns are great. For nails that fit into them and spaces they fit into. But if you can't hit a nail with a hammer, you're limited to the sort of tasks that can be accomplished with the nail guns and gun-nails you have with you.
That's the problem with picking apart a casually made metaphor instead of sticking to the original question. Since when does AI-assisted coding mean 100% AI and not a single line written yourself? That is only the extreme end! Same with the nails, actually; I doubt the builders don't also have and use hammers.
> This is the way with many labor-saving devices.
I think that's more the problem of people using only the extremes to build an argument.
The problem is that there is all this capital and no place to put it, so yes it seems circular, but some of that is to be expected.
As for Burry, he recently called out the changes to how the big players are amortizing their capital expenses for all these data center build-outs. He is correct in calling it out, but he's getting the wrong signal from it. Moore's law died a long time ago, and now we're basically hitting multiple walls at the same time: node scaling at the chip fabs, power and cooling in the data center, and only linear growth on the product side (a consequence of the other two).
Go back to the 2008-ish time period. There were a lot of data centers that hit the wall on availability of power and cooling, and those were hard problems to solve then. The solution was not to upgrade but to "build new," and we're seeing a lot of the same types of issues today.
Nvidia has unsustainable margins, and the memory manufacturing side is now in on the grift too... They are sucking up the profit while they can, because the dip is going to be BRUTAL (likely a boon to consumers, but that's neither here nor there).
The general concept of the velocity of money is not what I'm talking about, it's specifically about vendors buying equity in their users who then buy the vendors' goods, in a tight circular fashion. See the other comments for more.
I can still make a book like that in my basement. People do this as a hobby now. You can still build chips like that in your garage. People do this as a hobby now.
These things DO NOT SCALE... you can't have 10,000 people running printing presses in their basements to crank out the NYT every day. A modern chip fab has more in common with the press that prints the NYT than it does with what you can crank out in your garage.
Let's look at TSMC's plant in AZ. They went and asked Intel, "Hey, where are you sourcing your sulfuric acid from?" When they looked at the American vendors, TSMC asked Intel, "How are you working with this?" Intel's response was that it was the best they could get.
It was not.
TSMC now imports sulfuric acid from Taiwan, because it needs to be outrageously pure. Intel is doing the same.
Every single part, component, step, and setup in the chain is like that. There is so much arcane knowledge that the loss of workers represents a serious setback. There are people in the production chain, with PhDs, who are literally training their successors because that's sort of the only option.
Do you know who has been trying the approach you are proposing? China. It has not worked.
The complexity of the fab processes isn't what the parent was talking about. They're talking about the major changes in the relationship between fabless semiconductor companies and commercial foundries.
The complexity of actual fabrication was always, and still is, entirely within the foundry. But in the early days of that model, designs could be handed off more easily at the logical level, leaving the physical design to back-end companies, which made designs much more portable between foundries. (The publisher analogy.) What's changed is that the complexity of physical design has exploded: you can't make the handoff at nearly as high a level, and there is much more work that depends directly on the specific process you are targeting. Much more work at the physical level falls to the fabless semi companies, so it is much more work to retarget a design to a different foundry or process.
> I can still make a book like that in my basement. People do this as a hobby now. You can still build chips like that in your garage. People do this as a hobby now.
You can absolutely manufacture a convincingly-professional, current-generation book in your basement with a practically-small capital investment.
You cannot manufacture a convincingly-professional chip (being generous: feature size and process technology from the last two decades) in your basement without a 6-7 figure capital expenditure, and even then - good luck.
Gibson was writing about California specifically, and the Bay Area specifically. That state and that part of it had already had, since the 1960s at least, a reputation for attracting homeless people from across the country thanks to its clement weather. He could have merely been extrapolating from that and not necessarily prophetic about any of the issues today.
We don't build high-density housing. We killed off the boarding house. There's like one left in DC when there used to be dozens... They were common enough that even in the '80s you could make a TV show about one; now if you said "boarding house" someone would look at you like you had 9 heads.
We don't have SROs anymore... In 1940 the YMCA of New York had 100k rooms for rent...
> If you built 400 condos, 1600 more rich people move in. Supply is not the issue as far as I can see it.
Do you know what the largest predictor of voting is? Home ownership. Do you know what drives homeowners to the polls more than anything else? Protecting the value of their home.
The state has sued, and continues to sue, towns for the fuckery they have been doing to block housing development and prop up property prices. The 60 percent of people who are most likely to vote will turn up at the polls to make sure costs do NOT go down. It is the tyranny of the majority...
So yes, there are plenty of HOUSES, and not enough of everything else that we need for people to live.
> Just how sure are we the AI bubble is the entire reason for these absurd prices?
We're not, and the market being what it is, they don't have to talk to each other to know to jack up prices.
This RAM price spike leads into Nvidia's reporting for this quarter: gross margins were 70 percent. It's looking like their year-over-year increase in margins (roughly a doubling) is not because they came anywhere close to shipping double the number of units.
Meanwhile, if you look at Micron, their gross margin was 41% for fiscal year 2025, and 2024 looks to have been 24%.
Micron and its peers are competing with Nvidia for shareholder dollars (the CEO's real customer). They're jacking up prices because enough of the market is dumb enough to bear it right this second. And every CEO has to be looking at those numbers and thinking the same thing: "Where is my cut of the pie? Why aren't we at 60 percent?"
We're now at a point where hardware costs are going to inhibit development. Everyone short of the biggest players is now locked out, and that's not sustainable. Of the AI ventures there is only one that seems to have a reasonable product, and possibly reasonable financials. Many of the other players likely aren't going to be able to weather the write-downs.
The sane answer to monopolists abusing the system is not "we should just throw out all regulations and standards". It's "we should insist the government enforce the antitrust laws already on the books against these behemoths, rather than giving them deference".
One of these things actually works on the scale of a useful human lifetime. The other does not, and is currently headed in the opposite direction. If things ever get fixed, it will be my great-great-grandchildren who see the results.
I don't know if crypto is the solution, but watching even supposedly technologically literate places like HN largely cheer for cashless societies, where every transaction is monitored and gatekept while limits continue to be dialed downward, is all anyone needs to see to realize that working within the system is a plan for failure.
Transacting money is not a crime; surveilling it is simply a lazy way to fight crime. It enables enforcement of trivial policies and more or less removes a key pillar of human freedom.
> One of these things actually works on the scale of a useful human lifetime.
One of those things actually works, period.
Throwing out the baby with the bathwater is not going to lead to a better system overall. It may fix a few of the specific problems with the current system, but it introduces enough other problems that it becomes, at best, a wash.
And in practice, it's abundantly clear that cryptocurrency does not work. People don't want it. The only reason it is experiencing a resurgence in popularity now is because Trump, under the influence of various cryptocurrency scammers (note: this is not me saying "all cryptocurrency is a scam"; this is me saying "the specific people influencing Trump on this are scammers"), has been promoting it in order to feed their scams.
Some of us can recognize that the fascist takeover of our specific government doesn't change the fundamental truths about how governments function.
Yes, antitrust enforcement in the US is a joke right now. It was actually ramping up to be pretty darn good under Biden. Now it's not, and we have several other problems to solve before we're going to be able to get to it.
None of that makes cryptocurrency a better overall answer to the problems described than fiat currency with proper antitrust controls. Frankly, it's not even a better overall answer to the problems described than fiat currency without proper antitrust controls.
I can't speak for the author, but they said they have a Coral TPU passed into the LXC and the container, which I also have on my Proxmox setup for Frigate NVR.
Depending on your hardware platform, there can be valid reasons why you wouldn't want to run Frigate NVR in a VM. Frigate NVR works best when it can leverage the GPU for video transcoding and the TPU for object detection. If you pass the GPU to the VM, the Proxmox host no longer has video output (without a secondary GPU).
Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM. This is a non-starter for systems where there is no extra PCIe slot for a graphics card, such as the many power-efficient Intel N100 systems that do a good job running Frigate.
The reason you'd put Docker into LXC is that it's the best-supported way to get Docker Engine working on Proxmox without a VM. You'd want to do it on Proxmox because it brings other benefits like a familiar interface, clustering, Proxmox Backup Server, and a great community. You'd want to run Frigate NVR within Docker because that is the best-supported way to run it.
At least, this was the case in Proxmox 8. I haven't checked what advancements in Proxmox 9 may have changed this.
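For reference, the LXC side of that setup is mostly just device passthrough in the container config. A rough sketch of what mine looks like (the container ID is an example, the Coral here is the USB version, and a privileged container is assumed; an unprivileged one needs extra idmap work):

    # /etc/pve/lxc/200.conf (excerpt)
    features: nesting=1                      # lets Docker run inside the LXC
    lxc.cgroup2.devices.allow: c 226:* rwm   # DRM devices (Intel iGPU render node)
    lxc.cgroup2.devices.allow: c 189:* rwm   # USB bus (Coral USB accelerator)
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir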
> Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM.
This is changing, specifically on QEMU with virtio-gpu, virgl, and Venus.
Virgl exposes a virtualized GPU in the guest that serializes OpenGL commands and sends them to the host for rendering. Venus is similar, but exposes Vulkan in the guest. Both of these work without dedicating the host GPU to the guest; they give mediated access to the GPU without requiring any special hardware.
There's also another path known as vDRM/host native context that proxies the direct rendering manager (DRM) uAPI from the guest to the host over virtio-gpu, which allows the guest to use the native mesa driver for lower overhead compared to virgl/Venus. This does, however, require a small amount of code to support per driver in virglrenderer. There are patches that have been on the QEMU mailing list to add this since earlier this year, while crosvm already supports it.
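For anyone who wants to poke at this, the virgl path is already usable with stock QEMU; roughly something like the following (a sketch, not a recipe: flag spellings vary between QEMU versions, and Venus needs a recent QEMU built against virglrenderer with Vulkan support):

    # Boot a guest with a virtio GPU that forwards GL rendering to the host (virgl)
    qemu-system-x86_64 \
      -machine q35,accel=kvm -m 4G \
      -device virtio-vga-gl \
      -display gtk,gl=on
    # Venus (Vulkan) is similar but newer; recent QEMU takes something like
    #   -device virtio-vga-gl,hostmem=4G,blob=on,venus=on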
To add to this, while I haven’t used it yet myself (busy with too many other projects), this gist has the clearest and most up to date instructions on setting up QEMU with virglrenderer that I’ve found so far, with discussion on current issues: https://gist.github.com/peppergrayxyz/fdc9042760273d137dddd3...
I have Frigate and a Coral USB running happily in a VM on an N97. GPU passthrough is slightly annoying (you need to use a custom ROM from here: https://github.com/LongQT-sea/intel-igpu-passthru). I think SR-IOV works but I haven't tried it. And the Coral only works in USB3 mode if you pass the whole PCIe controller.
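For anyone hitting the same thing, the two options look roughly like this on the Proxmox side (sketch only; the VM ID, USB ID, and PCI address are examples, and the Coral re-enumerates with a different USB ID once its firmware loads, which is one more reason to just pass the controller):

    # Option 1: pass just the Coral by USB ID (the path that stays in USB2 mode for me)
    qm set 101 --usb0 host=1a6e:089a
    # Option 2: pass the whole USB controller to get USB3 (find its address with lspci)
    qm set 101 --hostpci0 0000:00:14.0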
I've been debating whether I should move my Frigate off an aging Unraid server to a spare mini PC with Proxmox. The mini has an N97 with 16 GB of RAM. How many cameras do you have in your Frigate instance on that N97? Just wondering if an N97 is capable of handling 4+ cameras. I do have a Coral TPU for inference and detection.
I have around 6 cameras, mostly 1080p, and about 8 GB RAM and 3 cores on the VM (plus Coral USB and Intel VAAPI). CPU usage is about 30 - 70% depending on how much activity there is. I also have other VMs on the machine running container services and misc stuff.
There are some camera stability issues, which are probably WiFi related (2.4 GHz is overloaded), and Frigate also has its own issues (e.g. detecting static objects as moving), but generally I'm happy with it. If I optimize my setup some more I could probably get it to < 50% utilization.
Perfect, thanks. I'll give the N97 a go and put it to good use as a dedicated Frigate NVR box. It certainly has a much lower power draw than my Unraid server.
At first I had the unholy abomination that is the Frigate LXC container, but since it's not trivially updatable and subtly breaks other things, I ended up going with Docker. I was debating moving it into a VM, but for the most part, Docker on LXC only gave me solvable problems.
It's not always better. Docker on LXC has a lot of advantages. I would rather use plain LXC on production systems, but I've been homelabbing on LXC + Docker for years.
It's blazing fast and I cut my RAM consumption by around 60%. It's easy to manage, boots instantly, and allows for more elastic separation while still using Docker and/or k8s. I love that it lets me keep using Proxmox Backup Server.
I'm postponing homelab upgrade for a few years thanks to that.
> While it can be convenient to run “Application Containers” directly as Proxmox Containers, doing so is currently a tech preview. For use cases requiring container orchestration or live migration, it is still recommended to run them inside a Proxmox QEMU virtual machine.
The way I understand it is that Docker with LXC allows for compute/resource sharing, whereas dedicated VMs will require passing through the entire discrete GPU. So the VMs would require a total passthrough of those Zigbees, while a container wouldn't?
I'm not exactly sure how the outcome would have changed here though.
It should in an ideal world, but Docker is a very leaky abstraction IMHO and you will run into a number of problems.
It has improved with newer kernel and Docker versions, but there were problems (overlayfs/ZFS incompatibilities, UID mapping problems in Docker images, capabilities requested by Docker that aren't available in LXC, rootless Docker problems, ...).
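To give one concrete example of the kind of thing meant here: the classic failure mode on a ZFS-backed Proxmox host used to be Docker silently falling back to the slow vfs storage driver inside the LXC. One workaround people used (a sketch; newer kernels and ZFS releases can reportedly do overlay2 directly) was to force fuse-overlayfs:

    # In the container's Proxmox config, allow FUSE alongside nesting:
    #   features: fuse=1,keyctl=1,nesting=1
    # Then, inside the LXC, /etc/docker/daemon.json:
    {
      "storage-driver": "fuse-overlayfs"
    }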
I'm not sure I would want my daily driver to be a hypervisor... What's controlling audio? Do I really need audio kernel extensions on my hypervisor? Who's in charge when I shut the lid on my laptop?
But the moment you stop trying to do everything locally, Proxmox, as it is today, is a dream.
It's easy enough to spin up a VM, throw a client's docker/podman + other insanity onto it, and have a running dev instance in minutes. It's easy enough to work remotely in your favorite IDE/dev env. Do I need to "try something wild"? Clone it... build a new one... back it up and restore it if it doesn't work...
Do I need to emulate production at a more fine-grained level than Docker can provide? Easy enough to build something that looks like production on my Proxmox box.
And when I'm done with all that work... my daily driver laptop and desktop remain free of cruft and clutter.