I think a major factor in the hype is that it's especially useful to the kind of people with a megaphone: bloggers, freelance journalists, people with big social media accounts, youtubers, etc. A lot of project management and IFTTT-like automation type software gets discussed out of proportion to how niche it is for the same reason. Just something to keep in mind, I don't think it's some crypto conspiracy just a mismatch between the experiences of freelance writers vs everyone else.
While the popular thing when discussing the appeal of Clawdbot is to mention the lack of guardrails, personally I don't think that's very differentiating, every coding agent program has a command line flag to turn off the guardrails already and everyone knows that turning off the guardrails makes the agents extremely capable.
Based on using it lightly for a couple of days on a spare PC, the actual nice thing about Clawdbot is that every agent you create is automatically set up with a workspace containing plain text files for personalization, memories, a skills folder, and whatever folders you or the agents want to add. Everything being a plain text/markdown file makes managing multiple types of agents much more intuitive than other programs I've used which are mainly designed around having a "regular" agent which has all your configured system prompts and skills, and then hyperspecialized "task" agents which are meant to have a smaller system prompt, no persistent anything, and more JSON-heavy configuration. Your setup is easy to grok (in the original sense) and changing the model backend is just one command rather than porting everything to a different CLI tool.
Still, it does very much feel like using a vibe coded application and I suspect that for me, the advantages are going to be too small to put up with running a server that feels duct taped together. But I can definitely see the appeal for people who want to create tons of automations. It comes with a very good structure for multiple types of jobs (regular cron jobs, "heartbeat" jobs for delivering reminders and email summaries while having the context of your main assistant thread, and "lobster" jobs that have a framework for approval workflows), all with the capability to create and use persistent memories, and the flexibility to describe what you need and watch the agent build the perfect automation for it is something I don't think any similar local or cloud-based assistant can do without a lot of heavier customization.
They're offering 50% off the subscription to people who used to have Enhanced Autopilot [1]. As I predicted when the CEO's compensation plan had a part tied to FSD subscriptions, they are going to push more people onto it by bundling more features and cutting the price.
Reminds me of when an ISP offered me a discount if I would agree to sign up with their partnered TV service. I agreed on the condition that I didn't have to rent a box. But you can't use the service without a box ... ? Who cares, I got a discount.
As hinted with the Finder comment, "Spotlight" is behind much more than the command-space search box. I don't know what the Siri services might do other than Siri itself, but wouldn't shock me if they were involved in things like Shortcuts and Control Center widgets. I understand thinking things you don't use are simply a "waste of CPU and storage space", but this reads like the kind of posts I used to see in the Windows XP era where people would open Task Manager and kill random processes they didn't understand. Best to make a little more effort to understand what the OS is doing before taking a scalpel to it. Or if you'd rather not, there's always OpenBSD (being serious here, it's pretty cool).
If some process is going to take hours of cpu time, it should be opt in. At a minimum I’d like to be able to turn the bloody things off if I don’t want them.
I run cpu usage meters in my menu bar. The efficiency cores always seem busy doing one thing or another on modern macOS. It feels like Apple treats my e-cores as a playground for stupid features that their developers want a lot more than I do - like photoanalysisd, or file indexing to power spotlight, that hasn’t worked how I want it to for a decade.
I have a Linux workstation, and the difference in responsiveness is incredible. Linux feels like a breath of fresh air. On a technical level, my workstation cpu is barely any faster. But it idles at 0%. Whenever I issue a command, I feel like the computer has been waiting for me and then it springs to action immediately.
To your point, I don’t care why these random processes are using all my cpu. I just want them to stop. I paid good money for my Apple laptop. The computer is for me. I didn’t pay all that money so some Apple engineer can vomit all over with their crappy, inefficient code.
What a trash article. Why is the only photo, used to illustrate the point about narrow buildings, a photo of Manhattan instead of anything in Japan? When "our zoning laws" are enumerated, where are they talking about? Last time I checked there were no US federal rules on parking spaces. At least they acknowledge that multiple jurisdictions exist when talking about health codes. And as per usual when talking about Japan, they ignore the fact that Japan also has car-dependent suburbs and rural areas, where it is quite common for restaurants outside of city centers to need to balance costs with the need for a larger footprint and a parking lot. The role of culture in eating habits is also ignored, Americans take more pride in the self-reliance of cooking their own meals.
Thanks for the questions. I used a picture of Manhattan intentionally to show that it is possible in some parts of America. There's already tons of pictures of that type of building in Japan, where it originated.
The zoning laws are at the local and city level, as are the parking spaces.
Japan does have car dependent suburbs and rural areas, I'm not saying they don't. It's likely that Japan's $4 meals are concentrated in not-rural areas.
I really doubt that Americans take pride in not having cheap lunch options if they want them.
Seems like this article misses the enterprise angle which is the main question. I'm sure some gamers aching for an upgrade will sign up for cloud PCs while RAM is overpriced, just like how Geforce Now had a moment while GPUs were overpriced. But does it make any sense for businesses with massive fleets of Windows laptops, and might already have some kind of VDI setup, to replace them with thin clients? Would need some significant progress on the hardware.
I'm not really a big gamer but was looking into buying an xbox again. I already had a controller and thought why not try xbox cloud gaming on my Samsung TV.
With a decent internet connection I now struggle to see why anyone would want to buy a hardware Xbox. Games on the cloud version load instantly, play brilliantly and cost the same as the usual Game Pass as far as I can tell. The catalogue seems smaller maybe but aside from that I see little downside.
I could see it working well for PCs too - as long as the terminal device is seamless. I guess us devs have been renting computers in "the cloud" for decades anyway.
On the other hand I'm a software engineer and my incredibly powerful MacBook could be not much more than a fancy dumb terminal - to be honest it almost is already.
If I can play a very responsive multiplayer game of the latest call of duty on my $300 TV with a little arm chip in it, then I could well imagine doing my job on a cloud Mac if the terminal device looked and felt like a MacBook but had the same tiny CPU my TV has.
Not sure if I'd choose it as a personal device but for corporations it seems a no brainer.
Sure, why not? A lot of them are already heavily invested in ms services. Where I work, laptops are on three year leases, they’d be easy to switch if the IT suits thought it would be cheap enough.
I'd say you are wrong on gamers aching for this. Any amount of latency ruins games, even turn based games lose a lot of their enjoyment when the ui starts getting delayed from user input.
Linux malware looks different usually. This kind of plugin based framework running as its own process is uncommon, but web shells with similar functionality have been around for a while. And bad guys like working in the shell on Linux too, just a simple binary that reads commands from a socket is often all they need, but doesn't make for very fascinating blog posts. Some just install cloudflared, nothing custom needed at all.
You mean the costs? Travel is expensive but I don't think there's much of an argument that it's gotten more expensive like housing. International flights used to be quite a luxury, now it's so easy that the popular destinations are getting swamped.
Most of the relaxation-oriented holiday industry is definitely designed for couples and families, but backpacking, adventures, and cultural immersion are, in my opinion, better alone so that you don't have the easy escape of sitting around with your partner. And if you want to relax nothing's stopping you from booking a few nights alone at a Japanese onsen or one of those treehouse style resorts in a Central American rainforest. I've spent many nights at onsens in between more outdoorsy climbing and skiing legs of trips in Japan.
To be fair big tech did do a full court press to stop site blocking when such a law (SOPA/PIPA) was proposed in the US, and they continue to oppose the MPA's attempts to get site blocking via the courts. DMCA on the other hand seems very broken, don't give the MPA the "3 strikes" regime they want and you get sued into the ground like Cox. I suspect tech CEOs don't complain about this because they don't want the same treatment.
If someone can write instructions to download a malicious script into an codebase, hoping an AI agent will read and follow them, they could just as easily write the same wget command directly into a build script or the source itself (probably more effective). In that way it's a very similar threat to the supply chain attacks we're hopefully already familiar with. So it is a serious issue but not necessarily one we don't know how to deal with. The solutions (auditing all third party code, isolating dev environments) just happen to be hard in practice.
Given the displeasure a lot of developers have towards AI, I would not be surprised if such attacks became more common. We’ve seen artists poisoning their uploads to protect them (or rather, try and take revenge), I don’t doubt it might be the same for a non-negligible part of developers.
Yes, fetching arbitrary webpages is its own can of worms. But feels less intractable to me, it's usually easy to disable web search tools by policy without hurting the utility of the tools very much (depends on use case of course).
Part of the problem here is all the vendor lock in with the tools. It's a new category so it's to be expected, but currently any company that sells an enterprise cloud platform kind of needs their own AI coding tool suite to be competitive.
I would have expected IBM to buy and integrate another AI coding company or license one, instead of trying to build it themselves. IBM doesn't have a good track record of building products. Maybe they didn't have time, or were convinced it was too easy.
While the popular thing when discussing the appeal of Clawdbot is to mention the lack of guardrails, personally I don't think that's very differentiating, every coding agent program has a command line flag to turn off the guardrails already and everyone knows that turning off the guardrails makes the agents extremely capable.
Based on using it lightly for a couple of days on a spare PC, the actual nice thing about Clawdbot is that every agent you create is automatically set up with a workspace containing plain text files for personalization, memories, a skills folder, and whatever folders you or the agents want to add. Everything being a plain text/markdown file makes managing multiple types of agents much more intuitive than other programs I've used which are mainly designed around having a "regular" agent which has all your configured system prompts and skills, and then hyperspecialized "task" agents which are meant to have a smaller system prompt, no persistent anything, and more JSON-heavy configuration. Your setup is easy to grok (in the original sense) and changing the model backend is just one command rather than porting everything to a different CLI tool.
Still, it does very much feel like using a vibe coded application and I suspect that for me, the advantages are going to be too small to put up with running a server that feels duct taped together. But I can definitely see the appeal for people who want to create tons of automations. It comes with a very good structure for multiple types of jobs (regular cron jobs, "heartbeat" jobs for delivering reminders and email summaries while having the context of your main assistant thread, and "lobster" jobs that have a framework for approval workflows), all with the capability to create and use persistent memories, and the flexibility to describe what you need and watch the agent build the perfect automation for it is something I don't think any similar local or cloud-based assistant can do without a lot of heavier customization.
reply