
Maybe it's just me, but everything about this announcement feels very I, Robot... and not in a good way.

> allowing the entire fleet to upload terabytes of data for continuous learning and improvement

Ugh.

Edit: Yes, I meant I, Robot the film. U.S. Robotics and the like.


Nearly everything about this screams I, Robot and it is kinda wild that they went that route with this article. The package delivery and the quick intro and head turning in particular.

I agree on the data part. I love the idea of a humanoid robot at home to take care of chores, but now it seems like the potential for it not being constantly connected and collecting data has gone out the window.

I find it quite strange that they are openly bragging about how much data it will be gathering and uploading from within your home. That feels like the part you would not say out loud.


> U.S. Robotics and the like.

The modem[1] folks? :)

[1]: https://en.wikipedia.org/wiki/USRobotics


Do you mean "I, Robot", not iRobot the vacuum company? And if so, I'm guessing you're referring to the movie with Will Smith? The original book of short stories isn't really dystopian, it's more just an interesting exploration of Asimov's concept of how robots would work.


Robotics AI has a massive "training data bottleneck" issue. If you aren't using your deployed robot fleet to get more real world training data, you're just stupid.


Yeah, tech companies have a weird fixation on using dystopian literature as their entire branding playbook.



It's just that sci-fi authors try to see into the future and have to make it interesting. There are two ways:

- novel idea or technology

- counterintuitive effect of technology

I think the second is more easily written as "what if Good Thing was actually Bad". So that's what you get. The former style is perhaps still available in books like Children of Time by Adrian Tchaikovsky.

But the latter style is much more readily written, and consequently has dominated sci-fi as more authors enter the field.

The Torment Nexus view is mostly driven by context blindness. "Oh my god, they'll scan the mother's blood to perform eugenics if they have sequencing technology, and it will be horrible." Well, advanced societies do that a lot: Down's syndrome is screened for with a maternal serum alpha-fetoprotein (MSAFP) test. "Oh my god, they'll use ultrasounds to find undesirable genetics, Torment Nexus", but nuchal translucency tests are fairly routine in advanced societies and we're fine with them.

This might look like a fixation on dystopian literature to others ("omg, Gattaca this MSAFP"). It's just generic techno-luddism, because almost all near-future tech is explored via sci-fi in the "what if Good is Bad" genre.


I mean, you're definitely assuming positive outcomes here too. Far too early to tell how most tech will end up being used.


No, I'm not. I'm simply saying that whether the outcomes are going to be positive or negative, it will always seem like the Torment Nexus. Therefore, something sounding like the Torment Nexus does not provide information towards a prediction that it will be the Torment Nexus.

People warned about the dangers of social media (or, with modern LLMs and diffusion models, of scamming) and that's kinda come true, but people also warned about the dangers of IVF and that's just been good. So what happens is that people always warn about the dangers. Humans are loss-averse, so they find it easy to do that.

It is unsurprising that every new tech seems like dystopian literature because there's a lot of dystopian literature focused on the near future and we're good at coming up with negative hypotheses. There is no significance in it.


Dystopian literature was training data and road-mapping.


Unless I misunderstand, it also misses that NewsBlur is open source and can be self-hosted: https://github.com/samuelclay/NewsBlur


They also have a free tier for the hosted version that is pretty generous (64 sites). I used the free hosted version for years after Reader went away and only upgraded as a way to support software that I use and enjoy regularly.


This will be great. I only noticed two issues with the mobile app, and one of them had to do with performance - it would get "stuck" uploading an image to the server, and no amount of waiting would let it complete. So I'd have to restart the app, and it would always start over and check every image that was in the library before uploading would begin again.

The second issue relates to timestamps on iCloud photos. The date on the photo in iCloud is not respected when uploading to Immich, meaning photos dated 40-90 years ago show up as being taken today.


The only problem I've had with it so far is that the date on photos coming from iCloud is when they were uploaded, not the date the photo was created or even the date I've marked the photo as being taken. Makes seeing photos from 90 years ago kind of strange.


Does iCloud by any chance strip the exif data from the photos, so the real date is simply not available anymore?


It does not


On macOS on ARM, you can download the iOS app and install it. That's what I did to import my wife's photos.


I don't love that this is opt-in by default, but I'm happy that they're at least offering an opt-out.


I dunno, I feel like we’ve seen this play out often enough: the “option to opt out” is absolutely going to be the first feature slated for elimination on the product roadmap. “After all, only 5% of customers are using it.”


The terms "opt-in" and "opt-out" indicate what the default is, so "... by default" is redundant. "Opt-in" means that you can opt (choose) to be in while the default is out.

In this case, since the default is in unless you opt out, it's opt-out.


I agree with everything you’ve said, but I’m also happy that they’re forcing users, both new and existing, to make a choice to continue using Claude under the new terms, rather than silently starting to train on the data of existing users who take no action.

Like you, I would have preferred that the UI for the choice didn’t make opt-in the default. But at least, this is one of the rare times where a US company isn’t simply assuming or circumventing consent from existing users in countries without EU-style privacy laws who ignore the advance notification. So thank you Anthropic for that form of respect.


The terms say it is opt-out, not opt-in, despite the wordplay.

> We may use Materials ... unless you opt out of training through your account settings.

[1] https://github.com/OpenTermsArchive/GenAI-versions/commit/d8...


It's opt-out, so it's in by default. Opt-in would mean it's out by default and would be a good thing.


Seems like an LLM keeping up with the amount of email that people receive would be cost-prohibitive, either in dollars or CPU time.


A simple, open-source spam-filtering approach gets rid of 99.99% of spam. My total filtered email volume in a day is in the single to low double digits for my personal account and double digits at work. That is very much in range for LLM filtering of the mail that passes the mechanical spam filter.


I have an Apps Script that uses Gemini for this. Works great. Usage is within the free tier. I even had Gemini write the script.
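
A minimal sketch of the general shape (illustrative only, not the actual script: the search query, prompt, model name, and spam handling are placeholders, and it assumes a Gemini API key stored in Script Properties as GEMINI_API_KEY):

    // Sweep recent unread mail and ask Gemini whether each thread is spam.
    function classifyNewMail() {
      const apiKey = PropertiesService.getScriptProperties()
        .getProperty('GEMINI_API_KEY');
      const threads = GmailApp.search('in:inbox is:unread newer_than:1d');
      for (const thread of threads) {
        const msg = thread.getMessages()[0];
        const prompt = 'Reply with exactly SPAM or HAM.\n' +
          'Subject: ' + msg.getSubject() + '\n' +
          'Body: ' + msg.getPlainBody().slice(0, 2000);
        const resp = UrlFetchApp.fetch(
          'https://generativelanguage.googleapis.com/v1beta/models/' +
          'gemini-1.5-flash:generateContent?key=' + apiKey, {
            method: 'post',
            contentType: 'application/json',
            payload: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] })
          });
        const verdict = JSON.parse(resp.getContentText())
          .candidates[0].content.parts[0].text.trim();
        if (verdict === 'SPAM') thread.moveToSpam();
      }
    }

Hook it up to a time-driven trigger so it sweeps the inbox every few minutes; at single- to double-digit daily volumes this stays well within the free tier.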


Please share :-)


Because it has the ability to write tests for the PR in question.


Then it should open a PR for those tests so it can go through the normal CI and review process.


Doing that requires write access if you're a GitHub App. You can't just fork repositories back into another org, since GitHub Apps only have the permissions of the single organization they're installed in. Rulesets that prevent direct pushes to specific branches can help here, but they have to be configured for each organization.


It updates the existing PR with the tests, I believe. They'd still get reviewed and go through CI.


Right, the downside being that the app needs write access to your repository.


Writing to PR branches should really be some new kind of permission.


Seems like there are multiple ways to address that within the GitHub ecosystem.

For example, you can set up a GitHub Action triggered by `pull_request_target` that will call CodeRabbit's API to generate a patch and then push a new commit to the branch. This way CodeRabbit is invoked by a well-defined and minimal action (since this action will have write access to the repo) rather than itself having crazy power to do anything it wants on your repository.
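
Roughly, that first option could look like this (sketch only: the patch endpoint and secret name are hypothetical placeholders, and it assumes a same-repo PR branch; the `pull_request_target` trigger, label gate, and push step are stock GitHub Actions):

    # Narrowly-scoped workflow: fetch a patch from the review bot and
    # commit it, so the bot itself never holds write access to the repo.
    name: apply-review-patch
    on:
      pull_request_target:
        types: [labeled]   # run only when a maintainer adds the label
    jobs:
      apply:
        if: github.event.label.name == 'apply-bot-patch'
        runs-on: ubuntu-latest
        permissions:
          contents: write
        steps:
          - uses: actions/checkout@v4
            with:
              ref: ${{ github.event.pull_request.head.ref }}
          - name: Fetch and apply the bot's patch
            run: |
              # "review-bot.example" and REVIEW_BOT_TOKEN are placeholders.
              curl -fsS -H "Authorization: Bearer ${{ secrets.REVIEW_BOT_TOKEN }}" \
                "https://review-bot.example/patch?pr=${{ github.event.pull_request.number }}" \
                | git apply
              git config user.name "review-bot"
              git config user.email "review-bot@example.invalid"
              git add -A
              git commit -m "Apply review bot patch"
              git push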

Alternatively, why can't they just comment and propose a patch? GitHub's code review UI allows the human code reviewer to hit a button and incorporate that change into the PR.

There are pros and cons to these other techniques but the clear pro is that it would be more secure.

It just seems like they took the easiest way out rather than thinking it through, in typical AI-bro fashion.


It's more than that. It can suggest fixes which you can directly commit.


I ended up getting several Tasmota-based devices from https://www.athom.tech/. Run your own MQTT broker (Mosquitto, or RabbitMQ with its MQTT plugin) to control them, and no internet needed. They're super cheap, open source, and flashable with your own firmware if you want.
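
Local control is only a few lines once they're on MQTT. A sketch with the mqtt.js client (the broker address and device topic are placeholders for your own setup):

    // Toggle a Tasmota device over a local MQTT broker, no cloud.
    import mqtt from "mqtt";

    const client = mqtt.connect("mqtt://192.168.1.10"); // your broker

    client.on("connect", () => {
      // Tasmota commands go to cmnd/<device-topic>/<command> ...
      client.subscribe("stat/athom-plug-1/POWER");
      client.publish("cmnd/athom-plug-1/POWER", "TOGGLE");
    });

    // ... and the device reports state back on stat/<device-topic>/POWER.
    client.on("message", (topic, payload) => {
      console.log(`${topic}: ${payload.toString()}`); // "ON" or "OFF"
      client.end();
    });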


If you watch this Climate Town video (https://www.youtube.com/watch?v=8CkgCYPe68Q), then absolutely not: the disposable fast fashion we have today is not better. It's cheaper, but it's not higher quality, it requires trans-continental shipping, and it absolutely gets thrown away in ridiculous amounts.

Overall, it's worse in just about every metric other than "I can get this fun shirt online at 2am for $6."


And helped spread microplastics to every corner of the Earth.


Disposable fashion is driven by customer preferences. Lots of people like frequently buying cheap clothes.

