I don't know of any, but my strategy to avoid slop has been to read more long-form content, especially on blogs. When you subscribe over RSS, you've vetted the author as someone whose writing you like, which presumably means they don't post AI slop. If you discover slop, you unsubscribe. No need for a platform to moderate content for you... you are in control of the contents of your news feed.
I think incentives are the right way to think about it. Authentic interactions are not monetized. So where are people writing online without expecting payment?
Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.
I've been thinking recently about a search engine that filters away any sites that contain advertising. Just that would filter away most of the crap.
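As a rough sketch of how such a filter might work (the blocklist below is illustrative, not real; a serious crawler would use a maintained list like EasyList, and the function name is mine):

```python
# Sketch: flag pages that reference well-known ad networks so an index
# could exclude them. Tiny illustrative blocklist only.
import urllib.request

AD_HOSTS = [
    "doubleclick.net",
    "googlesyndication.com",
    "amazon-adsystem.com",
    "taboola.com",
]

def looks_ad_funded(url: str) -> bool:
    # Fetch the page and check whether it references any ad-network host.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return any(host in html for host in AD_HOSTS)

print(looks_ad_funded("https://example.com"))  # example.com carries no ads
```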
Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.
Monetization isn't the only possible incentive for non-genuine content, though. CV-stuffing is another, and it's likely to affect blogs - there have been plenty of obviously AI-generated/"enhanced" blogs posted here.
The stats I see for Facebook are $70 per US/Canadian user in ad revenue. I'm not sure how much people would be willing to pay for an ad free Facebook, but it must be below $70 on average. And as the parent comment said, the users who would pay that are likely worth much more than the average user to the advertisers.
Users who refuse to see ads would either use a different platform or run an ad blocker (especially by using the website rather than the app).
I'm seeing "Thanos committing fraud" in a section about "useful lies". Given that the founder is currently in prison, it seems odd to consider the lie useful instead of harmful. It kinda seems like the AI found a bunch of loosely related things and mislabeled the group.
If you've read these books, I'm not seeing what value this adds.
Be careful with the 'utility' model of explaining behavior. It is fairly easy to slide into 'if behavior X is manifested, X must somehow be useful'. You can use this model to explain behavior, but be aware of the circularity trap built into it: "She lied, thus the lie must have had use; even if it is not obvious, we will discover the utility if we dig deep enough".
Another model is post-rationalization: people do things instinctively, then rationalize why they did them after the fact. "She lied without thinking about it, then constructed a reason why the lie was rational to begin with".
At the extremes, some people will never lie, even to their detriment. Usually they seem to attribute this to virtue. Others will always lie. They seem to feel not lying is surrendering control. Most people are somewhere in between.
This is a good opportunity to assess what parts of your own online activity could be impacted by an attacker in the middle (assisted by a BGP leak or otherwise) and, if you're a service provider, how you can protect your customers.
At first pass you probably use HTTPS/TLS for the web, and you know that you shouldn't click through invalid certificate warnings. So the web, tentatively, looks pretty safe.
Email jumps out as vulnerable to eavesdropping, as we largely use opportunistic encryption when transferring messages between mail servers and an on-network-path attacker can use STARTTLS stripping or similar techniques. Most mail servers happily send using cleartext or without validating the TLS certificate. Check that you and your counter-parties are using DNSSEC+DANE, or MTA-STS to ensure that authenticated encryption is always used. Adoption is still quite low, but it's a great time to get started. Watch out for transactional email, like password reset messages, which virtually never validate encryption in transit (https://alexsci.com/blog/is-email-confidential-in-transit-ye... ; instead use multi-factor encryption).
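If you want to check whether a given domain has opted in to MTA-STS, here's a minimal sketch (assumes the dnspython package; gmail.com is just an example of a domain that publishes a policy):

```python
# Sketch: check whether a domain publishes an MTA-STS policy.
# Requires dnspython (pip install dnspython).
import urllib.request
import dns.resolver

def check_mta_sts(domain: str) -> None:
    # 1. A TXT record at _mta-sts.<domain> signals that a policy exists.
    try:
        for rdata in dns.resolver.resolve(f"_mta-sts.{domain}", "TXT"):
            print("TXT:", rdata.to_text())
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"No MTA-STS TXT record for {domain}")
        return
    # 2. The policy itself is served from a well-known HTTPS URL.
    url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.read().decode())

check_mta_sts("gmail.com")
```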
TLS certificates themselves are at risk, unfortunately. An attacker who controls the network in and out of your DNS servers can obtain domain-validated certificates for your domain, bypassing even protections like CAA records. DNSSEC is the classic solution here, although using a geographically distributed DNS provider should also help (see multi-perspective validation). Certificate transparency log monitoring should detect any attacker-issued certificates (a review of certificates issued for .ve domains would be interesting).
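For ad-hoc CT monitoring, one option is polling crt.sh (a public but unofficial interface to the CT logs; a production setup would use a dedicated monitor). A sketch:

```python
# Sketch: list recently logged certificates for a domain via crt.sh.
import json
import urllib.request

def logged_certs(domain: str) -> None:
    # %25 is a URL-encoded "%", i.e. a wildcard match on subdomains.
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.loads(resp.read())
    for e in entries[:10]:
        print(e["not_before"], e["issuer_name"], e["name_value"])

logged_certs("example.com")
```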
Ideally, we should build an internet where we don't need to trust the network layer. A BGP route leak would be a performance/availability concern only. We're not there yet, but now is a great time to take the next step in that direction.
There is a nuance here for Cloudflare users that makes this problematic.
While analyzing the Jabber.ru incident (which used this exact BGP->TLS vector), I discovered that Cloudflare's "Universal SSL" actively injects permissive CAA records that override user-defined restrictions.
If a user sets a strict accounturi CAA record (RFC 8657) to lock issuance to their specific account—specifically to prevent BGP hijackers from getting a cert—Cloudflare's system automatically appends a wildcard record alongside it to keep their automation working. Because CAs accept any valid record, this effectively nullifies the protection.
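To make the mechanism concrete, here's a zone-style sketch (the ACME account ID is hypothetical, and the exact records the automation appends may vary):

```
; Strict record the user sets (RFC 8657): only this ACME account may issue.
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"

; Permissive record appended alongside it by the Universal SSL automation.
; A CA matching this record may issue, so the strict record above no
; longer constrains anything.
example.com.  IN  CAA  0 issue "letsencrypt.org"
```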
It creates a situation where you think you have mitigated the BGP risk via DNS, but the vendor has silently reopened the door.
Attackers hijacking domains to get certificates issued are generally hijacking registrar accounts, which DNSSEC doesn't help with; that's probably one of the many reasons DNSSEC is so rarely deployed.
I'm not fixated on any particular argument, but the preceding comment offers network security advice as if it were best common practice, and it is not in fact that. That's all! Not a big thing.
I would be interested in your take: if you had to distrust the network, how would you protect HTTP, SMTP, DNS, and TLS certs? I suspect your answer isn't DNSSEC, but I'd be interested to hear what you would use instead. The European answer seems to include DNSSEC, considering adoption rates there; it's one of the tools they use.
We do have to distrust the network, which is partly why TLS cert validation now includes a bunch of mitigations around validation from multiple network positions, certificate transparency logs, etc.
DNSSEC adoption on major European properties is also quite low! Try a bunch of domains out (`host -t ds <domain>`). There are more in Europe, of course, but not very many, at least not major ones. My hypothesis, I think strongly supported: the more mature your security team, the more internal pushback against DNSSEC.
Sure, I'll do some homework for you. I just took the latest Tranco top million list (7N42X) and scanned the top thousand .cz domains. 61% of the top 100 .cz domains have DS records, as do 50.6% of the top thousand. That matches what others have been reporting and doesn't seem "quite low" to me.
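The methodology is simple enough to reproduce; a sketch (assumes dnspython and a local top-cz.txt with one domain per line, extracted from the Tranco list):

```python
# Sketch: count how many domains in a list publish DS records,
# i.e. have DNSSEC-signed delegations. Requires dnspython.
import dns.resolver

def has_ds(domain: str) -> bool:
    try:
        dns.resolver.resolve(domain, "DS")
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
            dns.resolver.NoNameservers, dns.resolver.LifetimeTimeout):
        return False

with open("top-cz.txt") as f:
    domains = [line.strip() for line in f if line.strip()]

signed = sum(has_ds(d) for d in domains)
print(f"{signed}/{len(domains)} ({signed / len(domains):.1%}) have DS records")
```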
If you're interested in talking about something other than DNSSEC, I would be interested in your thoughts here.
Considering that for most banners the "consent" option is the easy one, I assume a lot of people do consent. They just want to get rid of the banners.
However, I'd claim the point of the bad UX is to make users angry and have them complain about the EU etc. "demanding" those banners, in order to weaken the regulation of tracking. If they are successful (and they are making progress), "no more cookie banners" makes a much better headline than "more tracking".
The failure of the EU was not writing into (an updated version of) the law that setting a specific HTTP header means "no", and that "no" means "no", not "show me a popup to ask" (i.e. showing a popup in such cases would not be allowed).
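For illustration, honoring such a header server-side would be trivial; a sketch ("Consent-Tracking" is a hypothetical header name, the closest real-world analogue being Sec-GPC from Global Privacy Control, and Flask is just an example framework):

```python
# Sketch: a site that treats a (hypothetical) refusal header as a
# standing "no" and never shows a consent popup to such visitors.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Consent-Tracking", "").lower() == "no":
        # Standing refusal: no tracking, and no popup asking again.
        return "Page without tracking and without a consent banner."
    return "Page, possibly with a consent banner."

if __name__ == "__main__":
    app.run()
```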
It wouldn't matter because most of the consent flows you see are already not compliant. The problem is a perpetual lack of enforcement even for the blatant breaches. An HTTP header wouldn't change the situation, websites would still ignore it and still get away with it.
The consent flows are good enough that the companies selling them can claim that they're compliant, and enforcement is slow, partly because there are so many things that are not 100% clear.
The header would be a relatively clear cut situation, also opening the path to private enforcement via NOYB & Co.
A mandatory header would get implemented on sites that even halfway try to comply, and it would be extra easy to enforce on fully malicious sites. I think it would be useful.
No, they're directly in violation. This is fully settled; it's just that some companies are counting on it not being "the thing that gets an enforcement action".
How is ease of opt out versus opt in objectively measured?
Most of the time both options are presented clearly and within a few pixels of each other, but opt-in is usually slightly more eye-catching and/or more appealing. The effort in terms of mouse-movement distance or number of clicks is the same. While that's a design trick that will improve the opt-in rate, how can it be argued that the opt-out was not as "easy"?
It is very common for there to be "Accept all" and "More options" buttons where rejecting all requires multiple clicks via the latter. The sites which have a "Reject all" button right next to the "Accept all" one, with the same size and styling, aren't flagrantly violating the law.
> If the data subject’s consent is given in the context of a written declaration which also concerns other matters, the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language. Any part of such a declaration which constitutes an infringement of this Regulation shall not be binding.
> ... It shall be as easy to withdraw as to give consent.
Your example does appear muddy, but I also doubt any enforcement would target such sites.
What is extremely common, however, is an "Accept all" button versus a "Manage settings" button that opens another panel, where there is still no "Reject all" option, only various settings and a "Save choices" button that might or might not default to what you want. Such cases are blatant rule violations, both in the number of clicks and in the obfuscation of consent.
In recent pop-ups, you are technically opted out by default (or at least that is how it is presented; I have not actually checked their cookie activity).
It is two clicks to confirm that choice and dismiss the pop-up versus one to accept all cookies, but if you choose to interact with the site and ignore the pop-up instead, you are supposedly free of non-essential cookies by default.
I have been on a call with a CMP where they got mad at me for not resetting our users' preferences, and because our 'do not accept' rate was high due to the fact that I refused to de-promote that option via a dark pattern. I kid you not.
FWIW, looking at our stats for the past year:
No consent: 40.8%
Full Consent: 31%
Just closed the damn window: 28.1%
Went through the nightmare selector: 0.07%
Most sites use dark patterns in their banners, from not presenting a decline option at all to hiding it or renaming it beyond recognition. For example, I make an effort to always pick the "Decline all" option when available, and in practice I end up clicking "Allow all" on about 20-30% of banners because declining was impossible. So I can safely assume the general population clicks "Allow all" even more often.
Exactly. The GDPR specifies that declining must be as easy and accessible as accepting. So all of those companies using dark patterns are breaking the law.
It's always those awful websites with a million popups, adverts, sites that reflow after 10 seconds, etc. They would be horrible to use even without the cookie banners.
As a parent of young children I've found that I need my phone on any time my children are not with me. Calls from school or day care don't always come from the same number, so I answer every call when my kids are in the care of others (but none otherwise).
Then practice keeping your phone in your pocket for increasingly long periods of time. You may need to build up to this and to develop some level of control.
Child care now requires parents to be readily available, especially to pick up children who are sick. A century ago, child care providers were expected to care for sick children until the parent arrived. Failure to be responsive would be a violation of the child care agreement.
Is this really true? There are millions of parents who are unreachable while working. A surgeon isn’t going to be able to leave in the middle of surgery to pick up their kid from child care.
My kid got sick at daycare one time when I was over an hour away. They just had to stay there while I worked my way back.
I don't think that's a counter example. Our day care requested sick kids to be picked up within an hour. A single late pickup of a sick kid due to traffic/distance would be handled normally. An unreachable parent who turns off their phone and is extremely delayed, especially on multiple occasions is a very different thing.
When we looked at in-home child care, one of the options was a nanny who would care for kids even when the kids were sick. So I'm sure the rapid-pickup-of-sick-kids policy isn't universal. However, our day care had that policy and we made sure we knew who was "on-call" to get kids when important meetings or work travel impacted our availability.
It's unfortunate that SO hasn't found a way to leverage LLMs. Lots of questions benefit from some initial search, and searching well is hard enough that moderators likely grew frustrated with actual (or close-enough) duplicates; LLMs seem able to assist there. However, I hope we don't lose the rare gem answers that SO also had: those expert responses that share not just a programming solution but deeper insight.
I think SO is leveraging LLMs implicitly. I'll always ask an LLM first; that's the easiest option. I only come to SO if the LLM fails to answer.