The merits of this particular proposal aside, it's tactically important to get the ideas out there and build consensus about "where we want to get to."
Otherwise you're ceding control of the Overton window to the folks aiming for techno-serfdom.
I understand the need to seed future debates early.
My hesitation comes from the fact that most proposals implicitly assume
a “fixed physical capability” for AI systems — something we don’t actually have yet.
In practice, social impact won’t be determined by abstractions but by
power budgets, GPU throughput, reliability of autonomous systems,
and years of real-world operation.
If scaling hits physical or economic limits, the eventual policy debate may look more like
progressive taxation on high-wattage compute or specialized hardware
than anything being discussed today.
And if fully automated systems ever run safely for several consecutive years,
that would still be early enough for the Overton window to shift.
I’m not dismissing long-term thinking.
I’m pointing out the opportunity cost:
attention spent on hypothetical futures tends to displace attention from
problems that exist right now.
That tradeoff rarely appears in the discussion.
So for me it’s just a question of balance —
how much time we allocate to tomorrow’s world versus today’s neighborhood.
From my own vantage point, the future talk feels disproportionately dominant,
so the T-1000 analogy came naturally.
I think "tax AI" makes as little sense as "taxing Jacquard looms" or "taxing robot factory-arms" — these are all part of one long-term trend, and attention to that trend is overdue rather than premature.
Would you be comfortable giving that answer to someone who’s homeless or financially stuck today?
I wouldn’t — and that’s the whole point.
We talk about tomorrow far more than we talk about what’s happening right in front of us.
Quantum computing was "just around the corner." It wasn't.
Fusion was "imminent." It still isn't.
I never argued we shouldn’t discuss the future.
I said it’s a matter of balance — something I already stated explicitly.
> Would you be comfortable giving that answer to someone who’s homeless or financially stuck today?
What? Why on earth wouldn't I be comfortable talking with people who are already getting the short end of the economic stick about how the system has needed reform for many years?
If anything, I think you've got it backwards: Good luck convincing them that "we should probably let actual full automation happen" before debating what we want to do about it.
I’m talking about balance.
Attention is finite.
If someone is homeless or struggling, which do you think is more immediately useful to them:
food, or a debate about future taxation frameworks?
The obvious answer is both, but in the right proportion.
That’s the entire point I’ve been making from the start.
If you’re proposing that future-policy talk should take precedence for them, I’m not sure how that adds up.