Hacker News | ghostpepper's comments

I can sort of understand this. There are certain songs, e.g. a song from my wedding, that hit like a (good) ton of bricks every time I hear them, but I wouldn't want to listen to them every day. I feel like more and more banal experiences would cumulatively become associated with the song, until the wedding feeling becomes just one of many and the song starts to lose its association.

ChatGPT 5.2 is smarmy, condescending, and rude. The new personality is really grating, and what's worse, it seems to have been applied to the "legacy" 5.1 and 5.0 models as well.


Does a human review every sticker before it's ever shown to a child? If not, it's only a matter of time before the AI spits out something accidentally horrific.


I searched their site for any information on "how" they can claim it's safe for kids. This is what I could find: https://stickerbox.com/blogs/all/ai-for-kids-a-parent-s-guid...

> No internet open browsing or open chat features.
> AI toys shouldn’t need to go online or talk to strangers to work. Offline AI keeps playtime private and focused on creativity.

> No recording or long-term data storage.
> If it’s recording, it should be clear and temporary. Kids deserve creative freedom without hidden mics or mystery data trails.

> No eavesdropping or “always-on” listening.
> Devices designed for kids should never listen all the time. AI should wake up only when it’s invited to.

> Clear parental visibility and control.
> Parents should easily see what the toy does, no confusing settings, no buried permissions.

> Built-in content filters and guardrails.
> AI should automatically block or reword inappropriate prompts and make sure results stay age-appropriate and kind.

Obviously the thing users here know, and that "kid-safe" product after product has proven, is that safety filters for LLMs are generally fake. Perhaps they can exist some day, but a breakthrough like that isn't going to come from an application-layer startup like this. Trillion-dollar companies have been trying and failing for years.

All the other guardrails are fine but basically pointless if your model has any social media data in its dataset.


They fail their own checklist in that article.

> Here’s a parent checklist for safe AI play:

> [...] AI toys shouldn’t need to go online

From the FAQ:

> Can I use Stickerbox without Wi-Fi?

> You will need Wi-Fi or a hotspot connection to connect and generate new stickers.


I'm sure you're right that some clever prompting or tricks could get it to print inappropriate stickers, but I believe in this case it may be OK.

If you consider a threat model where the threat is printing inappropriate stickers, who are the threat actors? Children who are deliberately trying to circumvent the controls and print inappropriate stickers? If they already know about the topics they shouldn't be printing and are actively trying to get the toy to print them, they probably don't truly _need_ the guardrails at that point.

In the same way that many small businesses don't (and most likely can't afford to) put security controls in place that are only relevant to blocking nation-state attackers, this device really only needs enough controls to prevent a child from accidentally getting an inappropriate output.

It's just a toy for kids to print stickers with, and as soon as the user is old enough to know or want to see more adult content they can just go get it on a computer.


ChatGPT allegedly has similar guardrails in place, and has now allegedly encouraged minors to commit self-harm. There is no threat actor; it's not a security issue. It's an unsolved, and as far as we know intrinsic, problem with LLMs themselves.

The word "accidentally" is slippery; our understanding of how accidents happen in software systems doesn't apply to LLMs.


It's too bad, because it's such a great project otherwise. He puts a ton of free labour into the system, and I'm sure he's dealt with some entitled users, but it's really a huge reason I don't recommend the project to more people. Actively telling people they must learn to solder and making Telegram the only support channel are two big turn-offs for a lot of people.

This is absolutely his right, and perhaps keeping the project small is his intention, but in that case I wish there were an alternative vacuum firmware project.


For ChatGPT you can turn this memory off in settings and delete the ones it's already created.


I'm not complaining about the memory at all. I was complaining about the suggestion to continue with unrelated topics.


Same. I use ChatGPT Plus (the entry-level paid option) extensively for personal research projects and coding, and it seems miles ahead of whatever "Gemini Pro" is that I have through work. Twice yesterday, Gemini repeated a previous response verbatim, as if I hadn't asked another question and told it why the previous response was bad. Gemini feels like ChatGPT from two years ago.


Peak swipe-to-text was on my HTC Desire circa 2010 using the third-party keyboard Swype. Everything since then has been a downgrade.


I remember when SwiftKey first launched on Android: the swipe-to-text was extremely good, and the built-in "learning by itself" dictionary worked well too. Of course, it seems Microsoft bought it at some point, so I don't even have to try it again to understand its current state.


I still refer to doing it on iPhone as swyping. The portmanteau has permanently genericized in my brain. Those were the days!


Aren't virtually all SBCs made in China?


I was referring to Board Support Packages.


Many of the complaints here don't make much sense and read like the author has never used an embedded Linux device. The previously reported bugs are more substantial: hardcoded secrets for JWT access and firmware encryption, everything running as root, etc.
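To illustrate why the hardcoded JWT secret is the serious one: once the signing key ships inside the firmware image, anyone who extracts it can mint tokens the device will trust. A minimal sketch using Python and the pyjwt library (the secret value and claim names here are made up for illustration):

    import jwt  # pip install pyjwt

    # A signing key baked into shipped firmware is effectively public:
    # anyone who pulls apart the image can read it.
    LEAKED_SECRET = "hardcoded-firmware-secret"  # hypothetical value

    # With the key in hand, an attacker can forge a token the device
    # will accept as if it had issued the token itself.
    forged = jwt.encode({"sub": "admin", "role": "admin"},
                        LEAKED_SECRET, algorithm="HS256")

    # The device's own verification happily accepts the forgery.
    print(jwt.decode(forged, LEAKED_SECRET, algorithms=["HS256"]))

A per-device, randomly generated secret would avoid that failure mode entirely.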

However, "Chinese product uses Chinese DNS servers and it's hard to change them" or "neither systemd nor apt installed" are totally expected and hardly make it "riddled with security flaws". Same with tcpdump and aircrack being installed: these hardly compromise security any more than having everything run as root already does.

I would expect most users of this device will not be exposing the web interface externally, and the fact that they ship with Tailscale installed is actually impressive. I can't imagine the lack of CSRF protection will be a vulnerability for 99% of users.

I am curious what the "weird" version of WireGuard the author refers to is, but based on their apparent lack of knowledge of embedded systems in general, I would not be shocked to find that it's totally innocuous.


I think you haven't gone far enough. Most of this thread is rampant ignorance and propaganda-influenced bandwagoning.

1) It's from a company known for dev boards and SoCs, not consumer products.

2) The code is available on GitHub (nice!)

3) Sipeed actively contributes to the mainline Linux kernel, for RISC-V in general as well as for their own SoCs.

4) Security in embedded applications is just... bad. American, Chinese, European, Russian, Indian: it doesn't matter.


Also, what do you really expect at a €30 or €60 price point, on a relatively low-volume product? It even doing what is promised is already a good start to me, and that probably tells you their priorities: start from an already-working image with wide feature support, add the features needed for the specific use case, and then ship it.


Hanlon's Razor at work; most of the shortfalls described in the article point to incompetence more than malice.

I find it strange, though, because I would call these the shortcomings of a crowdfunded project, but the author took them as a malicious, planned act to take over target computers and networks.

As far as I remember, some botnets are formed from routers that vendors refused to patch because they're no longer being sold and it's not profitable to do so.


Yeah... their list of issues speaks more to their lack of experience with and understanding of Linux and embedded Linux devices, wrapped in xenophobic nonsense.


Obviously hindsight is 20/20, but this sentiment just reeks of comical levels of hubris:

> However, the new research demonstrates that the magnetic field of light, long thought irrelevant,

