Yes, the idea of property taxes is NOT to tax the increase in wealth, just the current amount of wealth. And they serve a good purpose in municipalities, which have to make sure infrastructure such as roads, power, and sewage systems gets paid for somehow. More expensive houses typically require more of those.
I would go further and ask: how does a person who is unable to work survive in our current society? Should we let them die of hunger? Send them to Ecuador? Of course not; only Nazis would propose such a solution.
But we as humans still need to understand the outputs of AI. We can't delegate that understanding to AI, because then we wouldn't understand AI, and thus we could not CONTROL what the AI is doing or optimize its behavior so it maximizes our benefit.
Therefore, I still see a need for high-level and even higher-level languages, but ones that are easy for humans to understand. AI can help, of course, but the challenge is how to communicate with machines unambiguously, and how to express our ideas concisely and understandably for both us and the machines.
The way I see AI coding agents at the moment: they are interns. You wouldn't give an intern responsibility for the whole project. You need an experienced developer who COULD do the job with some help from interns, but now the AI can be the intern.
There's an old saying: "Fire is a good servant but a bad master." I think the same applies to AI. In "vibe-coding", AI is too much the master.
But it's the amount and location(?) of the vibes that matters.
Say I want to create a YouTube RSS hydrator that uses DeArrow to de-clickbait all the titles before they hit my RSS reader (there's a rough sketch of the end result after the levels below).
Level 1 (max vibe): I just say that to an LLM, hit "go", and hope for the best (maximum vibes on both spec and code). Most likely it's gonna be shit. Might work, too.
Level 2 (pair-vibing the spec): I pair-vibe the spec with an LLM. Web versions might work if they can access sites for the specs (figuring out how to turn a YouTube URL into an RSS feed, and how the DeArrow API works).
After the spec is done, I can give it to an agent and go do something else. In most cases there's an MVP done when I come back, depending on how easy the thing is to test automatically (RSS/Atom is a fickle spec, and readers implement it in various ways).
Level 3 continues the pair-vibed spec with pair-coding. I give the agent tasks in small parts and follow along as it progresses, interrupting if it strays.
For most senior folks with experience in writing specs for non-seniors, Level 2 will produce good enough stuff for personal use. And because you offload the time-consuming bits to an agent, you can do multiple projects in parallel.
Level 3 will definitely bring the best results, but you can only progress one task at a time.
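To make Level 2's output concrete, here's a minimal sketch of what the finished hydrator might look like, in Python. Hedge accordingly: it assumes YouTube's channel feeds live at youtube.com/feeds/videos.xml?channel_id=..., and that DeArrow's branding endpoint at sponsor.ajay.app/api/branding?videoID=... returns JSON with a "titles" list whose entries have a "title" field; the channel ID at the bottom is a placeholder, and both endpoints should be checked against the real docs before trusting any of this.

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
YT = "http://www.youtube.com/xml/schemas/2015"

def dearrow_title(video_id: str) -> str | None:
    """Return the top community-submitted title for a video, if any."""
    # Assumed DeArrow endpoint and response shape; verify against the docs.
    url = f"https://sponsor.ajay.app/api/branding?videoID={video_id}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except (OSError, ValueError):
        return None  # API down, unknown video, or bad JSON: keep the original
    titles = data.get("titles") or []
    return titles[0].get("title") if titles else None

def hydrate(channel_id: str) -> bytes:
    """Fetch a channel's Atom feed and de-clickbait every entry title."""
    feed_url = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    for entry in root.findall(f"{{{ATOM}}}entry"):
        video_id = entry.findtext(f"{{{YT}}}videoId")
        title_el = entry.find(f"{{{ATOM}}}title")
        if video_id and title_el is not None:
            better = dearrow_title(video_id)
            if better:
                title_el.text = better
    return ET.tostring(root)  # serve these bytes to the RSS reader

if __name__ == "__main__":
    # Hypothetical placeholder; substitute a real channel ID.
    print(hydrate("UC_PLACEHOLDER_CHANNEL_ID").decode())
```

In practice you'd put hydrate() behind a tiny HTTP endpoint or a cron job writing to a file, and point the RSS reader at that instead of YouTube, so the de-clickbaiting happens before the reader ever sees a title.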
What's the main diff between the different repos? I would think whoever keeps the repo free of malicious code is the best. A big player like GH should have an advantage there. And it's not just intentional malicious uploads: vulnerable code should also be detected and reported to the submitters.
Generally, if you are paying full price (paying per token), then your data is not used for training.
If you are not paying, or are paying a consumer-level price ($20/mo), your data will be trained on.
ETA: The terms say they use your data because "free" is the only option available in the preview. However, they also say you can disable sharing in your settings...
No, but you can be quite sure that somewhere deep inside the TOS there is a line saying their telemetry swallows your soul. If not, it will be added. It's Google; that's what they do.
I think the challenge is how to create a small but evolvable spec.
What LLMs bring to the picture is that the "spec" is high-level coding. In normal coding you start by writing small functions, then verify that they work. Similarly, LLMs should perhaps be given small specs to start with, with more functions/features added to the spec incrementally. Would that work?
Thanks! With Spec-Kit and Claude Sonnet 4.5, it wanted to design the whole prod-ready CLI up front. It was hard, if not impossible, to scope it to just a single feature or POC. This is what I struggled with most.
Were I to try again, I'd do a lot more manual spec writing, or even template rewrites. I expected it to work more or less out of the box. Maybe it would have for a standard web app using a popular framework.
It was also difficult to know where one "spec" ended and the next began; should I iterate on the existing one or create a new spec? This might be a solved problem in other SDD frameworks besides Spec-Kit, or else I'm just overthinking it!
> LLMs should not be seen as knowledge engines but as confidence engines.
The thing I like best about LLMs is when I ask a question about some technical problem and it tells me that it is a KNOWN problem. That gives me confidence that I don't need to spend time looking for a solution where there is no good solution; just work around it somehow. It lets me know I'm not the only person with this problem. And that way it gives me confidence that I'm not stupid: the problem is a real problem.
As an example, I was working with WebStorm and tried to find a way to make the Threads tab the default tab shown when the debugger opens. The AI told me there is no way it knows of. Good, problem solved: solved by finding out there is no solution.
This is the kind of stuff AI lies about all the time. I can get it to tell me "That is some good insight, and is a known issue..." about things I make up out of thin air.
Be careful. The models easily hallucinate problems and misdiagnose. For example, I had an issue with some GPU code, and the model assured me, with utter conviction, that my problem was caused by some subtle race condition ("a known issue") that it described in great detail, when the real issue was just a trivial typo: no race condition, no subtlety, no complexity.