Claude is absolutely plastering Facebook with this bullshit.
Every PR Claude makes needs to be reviewed. Every single one. So great! You have 10 instances of Claude doing things. Great! You're still going to need to do 10 reviews.
It's interesting to see this sentiment, given that there are literally dozens of people I know in person, living in Tokyo with no affiliation with Anthropic, who rave about Claude Code. It is good. Not perfect, but it does a lot of good stuff that we couldn't do before because of time constraints.
I am surprised by how many people don't know that Claude Code is an excellent product. Nevertheless, PR / influencer astroturfing makes me not want to use a product, which is why I use Claude in the first place and not any OpenAI products.
It is an excellent product but the narrative being pushed is that there's something unique about Claude Code, as if ChatGPT or Gemini don't have exactly the same thing.
Even having Opus review code written by Opus works very well as a first pass. I typically have it run a sub-agent to review its own code using a separate prompt. The sub-agent gets fresh context, so it won't get "poisoned" by the top-level context's justifications for the questionable choices it might have made. The prompts then direct the top-level instance to repeat the verification step until the sub-agent gives the code a "pass", fixing any issues flagged along the way.
The result is change sets that still need review - and fixes - but are vastly cleaner than if you review the first output.
Doing runs with other models entirely is also good - they will often identify different issues - but you can get far with sub-agents and different personas (and you can, if you like, have Claude Code use a sub-agent to run Codex and prompt it for a review, or vice versa - a number of the CLI tools seem to have "standardized" on "-p <prompt>" to ask a question on the command line).
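For the curious, here's a rough Python sketch of what that outer verify/fix loop can look like when driven from outside the agent. It assumes a CLI agent that accepts `-p <prompt>` and prints its answer to stdout (as several do, per the above); the "PASS" convention and the retry cap are conventions of this sketch, not features of any tool:

    # Rough sketch of a verify/fix loop driven from outside the agent.
    # Assumes a CLI coding agent (claude, codex, ...) that accepts
    # `-p <prompt>` and writes its answer to stdout. The "PASS" marker
    # and the retry cap are conventions of this sketch, not of any tool.
    import subprocess

    REVIEW_PROMPT = (
        "Review the current diff for bugs, missing error handling and "
        "questionable design choices. Reply PASS if clean, otherwise list issues."
    )

    def ask(agent: str, prompt: str) -> str:
        """Run a CLI agent non-interactively and return its stdout."""
        result = subprocess.run([agent, "-p", prompt],
                                capture_output=True, text=True)
        return result.stdout

    for _ in range(5):  # cap the verify/fix cycle so it terminates
        verdict = ask("claude", REVIEW_PROMPT)  # fresh process = fresh context
        if "PASS" in verdict:
            break
        ask("claude", "Fix the following review findings:\n" + verdict)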
Basically, reviewing output from Claude (or Codex, or any model) that hasn't been through multiple automated review passes by a model first is a waste of time - it's like reviewing the first draft from a slightly sloppy and overly self-confident developer who hasn't bothered checking if their own work even compiles first.
> Basically, reviewing output from Claude (or Codex, or any model) that hasn't been through multiple automated review passes by a model first is a waste of time - it's like reviewing the first draft from a slightly sloppy and overly self-confident developer who hasn't bothered checking if their own work even compiles first.
Well, that's what the CI is for. :)
In any case, it seems like a good idea to also feed the output of compiler errors and warnings and the linter back to your coding agent.
Sure, but I'd prefer to catch it before that, not least because it's a simpler feedback loop to ensure Claude fixes its own messes.
> In any case, it seems like a good idea to also feed the output of compiler errors and warnings and the linter back to your coding agent.
Claude seems to "love" to use linters and error messages if it's given the chance and/or the project structure hints at an ecosystem where certain tools are usually available. But just listing by name in CLAUDE.md a set of commands it can use to check things will often be enough to have it run them aggressively.
If that's not enough, you can use hooks to either force it, or sternly remind it after every file edit, or e.g. before it attempts to git commit.
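For example, here's a minimal sketch of such a hook script, assuming ruff as the linter and going by my reading of the Claude Code hooks contract (hook input arrives as JSON on stdin; exiting with code 2 feeds stderr back to the model):

    #!/usr/bin/env python3
    # Sketch of a PostToolUse hook: lint the file Claude just edited and,
    # on failure, push the linter output back into the model's context.
    # Assumes ruff is installed; the stdin-JSON / exit-code-2 contract is
    # my reading of the Claude Code hooks documentation.
    import json
    import subprocess
    import sys

    payload = json.load(sys.stdin)  # hook input from Claude Code
    path = payload.get("tool_input", {}).get("file_path", "")

    if path.endswith(".py"):
        lint = subprocess.run(["ruff", "check", path],
                              capture_output=True, text=True)
        if lint.returncode != 0:
            # Exit code 2 is the "blocking error" signal: stderr is fed
            # back to Claude so it can fix its own mess.
            print(lint.stdout, file=sys.stderr)
            sys.exit(2)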
Exactly. IPv6 was developed in the ivory towers where it was still assumed that everyone wanted to be a full participant of the internet.
But the social/political environment was that everyone just wants to be a passive consumer, paying monthly fees to centralized hosts to spoon-feed them content through an algorithm. For that, everyone being stuck behind IPv4 CG-NAT and not being able to host anything without being gatekept by the cloud providers is actually a feature and not a bug.
We've only seen the world where everything has been adapted to IPv4. P2P technologies thrive even under it, but they could really shine with the ability to connect directly between devices. Imagine BitTorrent on steroids, without the split between peers with public IPv4 addresses and seedboxes on one side and everybody else behind NAT on the other. Torrents are generally faster than the usual channels for downloading things, but with IPv6 they would be far faster than now.
Cloudless cameras streaming to your phone without Chinese vendor clouds, e2e-encrypted email running on your phone without snooping by marketing people and three-letter agencies, content distribution networks without vendor lock-in. The possibilities are impressive if we have a way to do it without TURN servers, which cost money and create technical and legal bottlenecks.
We can't say nobody wants that world because we've never tried it in the first place. I definitely would like to see that.
Don't you think everyone should have the option to be a full participant? Being locked behind cloud providers and multiple layers of NAT with IPv4 means that can never happen, even if consumers want it to.
I was lucky enough to experience the 90's internet where static IP addresses were common. I had a /24 (legacy "class C" block) routed to my home, and still do.
> Exactly. IPv6 was developed in the ivory towers where it was still assumed that everyone wanted to be a full participant of the internet.
IPv6 was developed in the open on mailing lists that anyone could subscribe to:
> The criteria presented here were culled from several sources, including "IP Version 7" [1], "IESG Deliberations on Routing and Addressing" [2], "Towards the Future Internet Architecture" [3], the IPng Requirements BOF held at the Washington D.C. IETF Meeting in December of 1992, the IPng Working Group meeting at the Seattle IETF meeting in March 1994, the discussions held on the Big-Internet mailing list (big-internet-at-munnari.oz.au, send requests to join to big-internet-request-at-munnari.oz.au), discussions with the IPng Area Directors and Directorate, and the mailing lists devoted to the individual IPng efforts.
Just like all current IETF discussions, it was in the open and free for anyone to participate in. If you don't like the direction things are going, participate: as Gandhi did (not) say, “Be the change you want to see in the world.”
One of the co-authors on that RFC worked at BBN: you know, the folks who actually built the first routers (IMPs) that created the ARPA/Internet in the first place. I would hazard a guess they knew something about network operations.
> But the social/political environment was that everyone just wants to be a passive consumer, paying monthly fees to centralized hosts to spoon-feed them content through an algorithm.
Disagree, especially with the hoops that users and developers have to jump through to deal with (CG-)NAT:
> [Residential customers] don't care about engineering, but they sure do create support tickets about broken P2P applications, such as Xbox/PS gaming applications, broken VoIP in gaming lobbies, failure of SIP client to punch through etc. All these problems don't exist on native routed (and static) IPv6.
Well, with such a description of the 'vices' of IPv4 versus the 'virtues' of IPv6, count me as one who considers himself in full support of the ivory-towered greybeards who decided the 'net was meant to be more than a C&C network for sheeple.

Once I got a /56 delegated by my IAP - which coincided with me digging down the last 60 metres of fibre conduit, after which our farm finally got a real network connection instead of the wires-on-poles best-effort ADSL connection we had before - I implemented IPv6 in nearly all services. Not all of them, no, because IPv6 can make life harder than it needs to be: internally some services still run IPv4 only and will probably remain that way, but everything meant to be reachable from outside can be reached through both IPv4 and IPv6.

I recently started adding SIP services, which might be the first instance of something I'll end up running IPv6-only, due to the problems caused by NATting the SIP control channels as well as the RTP media channels - reminiscent of how FTP could make life difficult for those on the other side of firewalls and NAT routers. With IPv6 I do not need NAT, so as long as the SIP clients support it I should be OK. Now, that last bit, client support... yes, that might be a problem sometimes.
IPv6 is used on mobile networks since there aren't enough IPv4 addresses. Some of these mobile networks are so big there aren't even enough private IPv4 addresses to fit their CG-NAT private side, leaving NAT64/DNS64 as the only clean solution.
Why would CGNAT be deployed as a response to IPv6 on mobile? I don't understand the logic there. CGNAT is deployed due to a shortage of publicly routable IPv4 addresses. IPv6 was introduced due to having much larger publicly routable space.
No, CGNAT (Carrier-Grade NAT - https://en.wikipedia.org/wiki/Carrier-grade_NAT) is an IPv4 only thing. https://www.rfc-editor.org/rfc/rfc6598 specifies they should use 100.64.0.0/10 for it, to avoid conflicting with the pre-existing private-use ranges. IPv6 removes the need for using CGNAT, as each home router is allocated a public IP (rather than a CGNAT IP) on its public link.
No, CGNAT has absolutely nothing to do with IPv6. CGNAT is nothing more than ISPs not providing a public IP to the gateway on your LAN (i.e. your router). To avoid conflicts with existing ranges, a new range was allocated for that purpose. There are different technologies to enable IPv4<->IPv6 interop, none of which care about the existence of CGNAT.
No, NAT64 was invented so v6-only hosts could access v4-only resources. CGNAT was invented so v4 hosts can have a v4 address without having to purchase limited public address space.
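If it helps make the distinction concrete, here's a small stdlib-only Python illustration of the two address spaces involved: the RFC 6598 shared range CGNAT uses (mentioned above), and the RFC 6052 well-known prefix a NAT64 uses to embed a v4 address in a v6 one:

    # The RFC 6598 shared space that CG-NAT uses, vs. the RFC 6052
    # well-known prefix (64:ff9b::/96) a NAT64 uses to embed an IPv4
    # address inside an IPv6 one. Stdlib only.
    import ipaddress

    CGNAT = ipaddress.ip_network("100.64.0.0/10")          # RFC 6598
    print(ipaddress.ip_address("100.77.12.34") in CGNAT)   # True: behind CG-NAT

    # NAT64 maps a v4 address into the low 32 bits of the well-known prefix.
    v4 = ipaddress.ip_address("192.0.2.33")
    nat64 = ipaddress.IPv6Address(int(ipaddress.IPv6Address("64:ff9b::")) + int(v4))
    print(nat64)  # 64:ff9b::c000:221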
Currently we do shadow shifts for a month or two first, but still eventually drop people into the deep end with whatever experience production gifts them in that time. That experience is almost certainly going to be a subset of the types of issues we see in a year, and the quantity isn’t predictable. Even if the shadowee drives the recovery, the shadow is still available for support & assurance. I don’t otherwise have a good solution for getting folks familiar with actually solving real-world problems with our systems, by themselves, under severe time pressure, and I was thinking controlled chaos could help bridge the gap.
You are making things harder for new hires than the environment you came into. It is a sink-or-swim strategy that introduces stress without any apparent compensating training. It creates new bases for evaluation that you were not subject to.
Hazing is a cycle of abuse in which the abuse inflicted in each hazing is a magnification of what was suffered in the previous cycle.
Thanks for this perspective, I think I’ll reconsider this plan (to be clear, haven’t done it) and try to think up some alternative training strategy that doesn’t involve live issues.
I'm waiting for the good AI-powered software... Any day now.
Ideally, LLMs should be able to translate from memory-inefficient languages to memory-efficient ones, and maybe even optimize the underlying algorithms' memory use along the way.
That's what I just said. There is zero value to me in knowing these numbers. I assume that all Python built-in methods are pretty much the same speed. I concentrate on IO being slow and minimize those operations. I think about CPU-intensive loops that process large data, and I try to use libraries like numpy, DuckDB, or other tools to do the processing. If I have a more complicated system, I profile its methods and optimize tight loops based on PROFILING. I don't care what the numbers in the article are, because I PROFILE, and I optimize the procedures that are the slowest, for example using cython. Which part of what I am saying does not make sense?
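To make that concrete, a minimal stdlib-only sketch of the workflow (hot_loop() is just a stand-in for real code under test):

    # Profile first, then optimize what the profiler actually flags.
    # Stdlib only; hot_loop() is a stand-in for real code under test.
    import cProfile
    import pstats

    def hot_loop():
        return sum(i * i for i in range(1_000_000))

    profiler = cProfile.Profile()
    profiler.enable()
    hot_loop()
    profiler.disable()

    # Top 10 entries by cumulative time - these are the candidates for
    # numpy/DuckDB/cython treatment, not whatever a benchmarks table says.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)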
As others have pointed out, Python is better used in places where those numbers aren't relevant.
If they start becoming relevant, it's usually a sign that you're using the language in a domain where a duck-typed bytecode scripting-glue language is not well-suited.
I’d like a turnkey k3s and a 10” rack designed for consumers. Set up to host your Minecraft server, store your media, and be incrementally upgradeable.