Nearly everything about this screams I, Robot and it is kinda wild that they went that route with this article. The package delivery and the quick intro and head turning in particular.
I agree on the data part. I love the idea of a humanoid robot at home to take care of chores, but now it seems like any hope of it not being constantly connected and collecting data has gone out the window.
I find it quite strange that they are openly bragging about how much data it will be gathering and uploading from within your home. That feels like the part you would not say out loud.
Do you mean "I, Robot", not iRobot the vacuum company? And if so, I'm guessing you're referring to the movie with Will Smith? The original book of short stories isn't really dystopian, it's more just an interesting exploration of Asimov's concept of how robots would work.
Robotics AI has a massive "training data bottleneck" issue. If you aren't using your deployed robot fleet to get more real-world training data, you're just stupid.
It's just that sci-fi authors try to see into the future while having to make things interesting. There are two ways:
- novel idea or technology
- counterintuitive effect of technology
I think the second is more easily written as "what if Good Thing was actually Bad". So that's what you get. The former style is perhaps still available in books like Children of Time by Adrian Tchaikovsky.
But the latter style is much more readily written and consequently has dominated sci fi as more authors enter the field.
The Torment Nexus view is mostly driven by context blindness. "Oh my god, they'll scan the mother's blood to perform eugenics if they have sequencing technology, and it will be horrible." Well, advanced societies already do that a lot: Down's is screened for with the maternal serum alpha-fetoprotein (MSAFP) test. "Oh my god, they'll use ultrasounds to find undesirable genetics, Torment Nexus," but nuchal translucency scans are fairly routine in advanced societies and we're fine with them.
This might appear like a fixation on dystopian literature to others. "omg gattaca this MSAFP". It's just generic technoluddism because almost all near future tech is explored via sci fi in the "what if Good is Bad" genre.
No, I'm not. I'm simply saying that whether the outcomes are going to be positive or negative, it will always seem like the Torment Nexus. Therefore, something sounding like the Torment Nexus does not provide information towards a prediction that it will be the Torment Nexus.
People warned about the dangers of social media (or, with modern LLMs and diffusion models, of scamming) and that's kinda come true, but people also warned about the dangers of IVF and that's just been good. So what happens is that people always warn about the dangers. Humans are loss-averse, so they find it easy to do that.
It is unsurprising that every new tech seems like dystopian literature because there's a lot of dystopian literature focused on the near future and we're good at coming up with negative hypotheses. There is no significance in it.
They also have a free tier for the hosted version that is pretty generous (64 sites). I used the free hosted version for years after Reader went away and only upgraded as a way to support software that I use and enjoy regularly.
This will be great. I only noticed two issues with the mobile app, and one of them had to do with performance - it would get "stuck" uploading an image to the server, and no amount of waiting would let it complete. So I'd have to restart the app, and it would always start over and check every image that was in the library before uploading would begin again.
The second issue is still related to timestamps from iCloud photos. The date that's on the photo in iCloud is not respected when uploading to Immich, meaning photos tagged from 40-90 years ago show up as being taken today.
The only problem I've had with it so far is that the date on photos coming from iCloud is the upload date, not the date the photo was created or even the date I've marked it as being taken. It makes seeing photos from 90 years ago kind of strange.
I dunno, I feel like we’ve seen this play often enough - “option to opt-out” is absolutely going to be the first feature slated for elimination on the product roadmap - “after all, only 5% of customers are using it.”
The terms "opt-in" and "opt-out" indicate what the default is, so "... by default" is redundant. "Opt-in" means that you can opt (choose) to be in while the default is out.
In this case, since the default is in unless you opt out, it's opt-out.
I agree with everything you’ve said, but am also happy that they’re forcing users, both new and existing, to make a choice to continue using Claude under the new terms, rather than silently starting to train on data from existing users who take no action.
Like you, I would have preferred that the UI for the choice didn’t make opt-in the default. But at least, this is one of the rare times where a US company isn’t simply assuming or circumventing consent from existing users in countries without EU-style privacy laws who ignore the advance notification. So thank you Anthropic for that form of respect.
A simple, open-source spam filtering approach gets rid of 99.99% of spam. My total filtered email volume in a day is in the single to low double digits for my personal account and in the double digits at work. This is very much in range for LLM filtering of the mail that passes the mechanical spam filter.
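A minimal sketch of that two-stage pipeline, assuming a cheap rule-based "mechanical" stage in front and an LLM pass only for survivors. All names here are illustrative, and the LLM call is a stub, not any real provider API:

```python
# Two-stage mail filtering sketch: a cheap mechanical filter handles the
# bulk of spam, and only the small remainder ever reaches the LLM.
import re

# Toy rule set for the mechanical stage (a real deployment would use
# something like a Bayesian filter or rspamd instead).
SPAM_PATTERNS = [
    r"(?i)\bviagra\b",
    r"(?i)you have won",
    r"(?i)wire transfer",
]

def mechanical_filter(message: str) -> bool:
    """Return True if the cheap rule-based stage flags the message as spam."""
    return any(re.search(p, message) for p in SPAM_PATTERNS)

def llm_is_spam(message: str) -> bool:
    """Placeholder for an LLM classification call.

    Stubbed to "not spam" so the sketch runs offline; replace with a real
    LLM request in practice.
    """
    return False

def classify(message: str) -> str:
    """Label a message "spam" or "ham" using the two-stage pipeline."""
    if mechanical_filter(message):
        return "spam"  # never reaches the (comparatively expensive) LLM
    return "spam" if llm_is_spam(message) else "ham"
```

The point of the ordering is cost: the mechanical stage is nearly free per message, so the per-message LLM cost only applies to the handful of mails a day that get past it.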
Doing that requires write access if you're a GitHub App. You can't just fork repositories back into another org, since a GitHub App only has permissions within the single organization it's installed in. Rulesets that prevent direct pushes to specific branches can help here, but they have to be configured for each organization.
Seems like there are multiple ways to address that within the GitHub ecosystem.
For example, you can set up a GitHub Action triggered by `pull_request_target` that calls CodeRabbit's API to generate a patch and then pushes a new commit to the branch. That way CodeRabbit is invoked by a well-defined, minimal action (since that action is what holds write access to the repo) rather than itself having crazy power to do anything it wants in your repository.
Alternatively, why can't they just comment and propose a patch? GitHub's code review UI allows the human code reviewer to hit a button and incorporate that change into the PR.
There are pros and cons to these other techniques but the clear pro is that it would be more secure.
It just seems like they took the easiest way out rather than thinking it through in typical AI-bro ways.
I ended up getting several Tasmota-based devices from https://www.athom.tech/. Run your own MQTT broker (a RabbitMQ-like service) to control them, and no internet is needed. They're super cheap, open source, and flashable with your own firmware if you want.
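For illustration, Tasmota devices subscribe to MQTT command topics of the form `cmnd/<device-topic>/<command>`. A sketch of building such a command locally (the device topic `kitchen-plug` is made up, and the commented-out publish call assumes the third-party paho-mqtt package and a broker on localhost):

```python
# Build the (topic, payload) pair for a Tasmota MQTT command.
# Tasmota's convention: publishing "ON" to cmnd/<device-topic>/POWER
# switches the relay on; no cloud service is involved.

def tasmota_command(device_topic: str, command: str, value: str) -> tuple[str, str]:
    """Return the MQTT topic and payload for a Tasmota command message."""
    return (f"cmnd/{device_topic}/{command}", value)

topic, payload = tasmota_command("kitchen-plug", "POWER", "ON")

# With a local broker (e.g. Mosquitto) running, actually publishing would
# look roughly like this (requires paho-mqtt, not stdlib):
#   import paho.mqtt.publish as publish
#   publish.single(topic, payload, hostname="localhost")
```

Since it's all plain MQTT on your LAN, the same commands work from Home Assistant, cron jobs, or a shell script with `mosquitto_pub`.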
If you watch [this Climate Town video](https://www.youtube.com/watch?v=8CkgCYPe68Q), then absolutely not, the disposable fast fashion we have today is not better. It's cheaper, but it's not higher quality, it requires trans-continental shipping, and it absolutely gets thrown away in ridiculous amounts.
Overall, it's worse in just about every metric other than "I can get this fun shirt online at 2am for $6."
> allowing the entire fleet to upload terabytes of data for continuous learning and improvement
Ugh.
Edit: Yes, I meant I, Robot the film. U.S. Robotics and the like.