Hacker News: Nextgrid's comments

A leak of politicians' dirty habits should hopefully do it.

Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.

Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data are compromised and plan accordingly.

Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.

This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.


But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.

I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.


No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.

A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.

How many people do proper egress filtering though, even when running a firewall?
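For reference, a default-deny egress policy is only a few lines of nftables config. This is a hypothetical sketch; the proxy address 10.0.0.2:3128 and the resolver rule are assumptions for illustration, not details from the thread:

```
# /etc/nftables.conf -- hypothetical default-deny egress policy for an app server.
# A curl/wget launched via an RCE would fail to reach an arbitrary malware host.
table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;

        ct state established,related accept      # replies to inbound connections
        oifname "lo" accept                      # local traffic

        udp dport 53 accept                      # DNS (or restrict to one resolver)
        ip daddr 10.0.0.2 tcp dport 3128 accept  # outbound HTTP only via the bastion proxy

        log prefix "egress-drop: " counter drop  # log everything else before dropping
    }
}
```

With this in place, the only way out is through the whitelisted proxy, which matches the scenario described above: the malware download and the connection to the mining server would both be blocked.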

Not that I’m disputing it, but do you have a source? Companies say all kinds of things for hype and to attract investors, but it doesn’t necessarily make it true.

Looking at their commits, there are 300+ commits tagged with the "Generated with https://claude.com/claude-code" attribution.

Just because AI tools are involved doesn't mean it's "Vibe coding".

It sure is a pretty good indicator, and if you underestimate human laziness you’re gonna have a bad time regardless.

Also, look at how much they’ve released, how fast, and how they blog like they own the world (or design the website that way).

I used to look up to PostHog as I thought, wow, this is a really good startup; they’re actually achieving a lot, fast.

But it turns out a lot of it was sloppy. I don’t trust them anymore and would opt for another platform now.


The rules are fine and do prohibit this, it's their enforcement that's (intentionally) flawed.

Social media moderation has to balance "engagement" with the potential for bad PR or liability for the company. It turns out that the content that breaks the rules is also the content that generates the most engagement, so enforcing the rules as written is bad for the bottom line.

Thus for every piece of content that is potentially against the rules, the actual condition for removal is whether the expected engagement outweighs the probability of someone rich/well-connected being inconvenienced by it, and how severe that inconvenience would be. Content is only removed when the liability potential exceeds the profit potential.

At the beginning the reports were ignored because the system determined it was more profitable to leave the content up. I'm not sure what "his pleas to take it down" refers to; it was likely just his staff members flagging it with their personal accounts, and those flags carry very little weight. Eventually either someone managed to talk to a human, a letter arrived at the legal department, or the content got enough impressions to become a risk, which caused the earlier flags to finally be reviewed by a competent human. At that point they realized what their liability was and quickly removed it.

You should expect to see an apology from their PR department soon and a promise they'll do better next time.


It's backpedaling, but I don't think it's planning ahead to prevent a developer shortage; rather, it's pandering to the market's increasing skepticism around AI, and to the fact that the promised moonshot of AI obsoleting all knowledge work didn't actually arrive (at least not in the near future).

It's similar to all those people who were hyping up blockchain/crypto/NFTs/web3 as the future; now that it's all come to nothing, they've adapted to the next grift (currently it's AI). He is now toning down his messaging in preparation for a cooldown of the AI hype, to appear rational and relevant to whatever comes next.


"We were against this all along"

The party line will be: “we always advised using it as long as it helps productivity.”

Pointing out that it wasn’t always that way will make you seem “negative.”


You are right, the perfect amount of false humility and balance. The wage suppression is an accidental byproduct and not the intent. Collateral damage, if you will.

This is an easy theory to test: if AI were anywhere close to a senior engineer, we'd see the cost of software development drop by a corresponding amount, or quality going up, not to mention faster delivery. With LLMs accessible to the general public, I'd also expect to see this in the open-source world.

I see none of that happening - software quality is actually in freefall (but AI is not to blame here, this began even before the LLM era), delivery doesn't seem to be any faster (not a surprise - writing code has basically never been the bottleneck and the push to shove AI everywhere probably slows down delivery across the board) nor cheaper (all the money spent on misguided AI initiatives actually costs more).

It is a super easy bet to take with money: software development is still a big industry, and if you legitimately believe AI will do 90% of a senior engineer's work, you can start a consultancy, undercut everyone else, and pocket the difference. I haven’t heard of any long-term success stories with this approach so far.


This is performative bullshit pandering to the increased skepticism around AI. He wouldn't be saying that if AI investment was still in full swing.

I do agree with him about AI being a boon to juniors and pragmatic usage of AI is an improvement in productivity, but that's not news, it's been obvious since the very beginnings of LLMs.


So it's performative when the head of AWS says it and not news. But it's not performative when you say it and people should have listened to you in the comments?

It's performative when you say whatever the market wants to hear rather than sticking to an opinion (no matter how flawed it is). This behavior reminds me of the cryptobros who were hailing NFTs/web3 as the best thing since sliced bread, and when that didn't come to pass, quietly moved on to the next grift (AI) with the same playbook.

(also I’m just talking out of my ass on a tech forum under a pseudonym instead of going to well-publicized interviews)


The main cartridge (with the cable modem) was presumably heavily subsidized by the expected recurring revenue, which relies on the ephemerality of the games. Offering RAM carts (even at cost) would threaten that revenue, as people could stock up on games and cancel their subscription once they'd built up their collection.

A lot of them got fooled by the caching; pages for signed-out users are cached heavily and those kept returning successful responses even if the actual backend server was down.

This site also got it right: https://downforeveryoneorjustme.com/hacker-news

I believe it's because they accept user reports.
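One way a checker can avoid being fooled like this is to inspect the caching headers instead of trusting the status code alone. A minimal sketch in shell; the `Age` and `X-Cache` header names are common CDN conventions, not something any particular site guarantees:

```shell
#!/bin/sh
# classify_response: read HTTP response headers on stdin and print
# "cached" if they look like a CDN cache hit, "origin" otherwise.
classify_response() {
  headers=$(cat)
  if printf '%s\n' "$headers" | grep -iqE '^age: *[1-9]'; then
    echo cached   # nonzero Age header: the response sat in a cache
  elif printf '%s\n' "$headers" | grep -iqE '^x-cache: *hit'; then
    echo cached   # explicit cache-hit marker from the CDN
  else
    echo origin   # no cache evidence; response likely came from the backend
  fi
}

# Example usage:
#   curl -sI https://news.ycombinator.com/ | classify_response
```

A "cached" result means a 200 response proves nothing about the backend being up, which is exactly how the signed-out pages kept fooling the status checkers.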


Yes! Now we see what a difference it makes.
