ratorx's comments | Hacker News

I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.

And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.

I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.

I think local code architecture was a very accessible domain where “optimality” was actually tractable, along with the joy that comes with it. LLMs are harmful to that, but I don’t think there’s nothing to replace it with.


Team lead manages the overall direction of the team (and is possibly the expert on some portions), but for an individual subsystem a senior engineer might be the expert.

For work coming from outside the team, it’s sort of up to your management chain and team lead to prioritise. But for internally driven work (tech debt reduction, reliability/efficiency improvements etc) the senior engineer often has a better idea of the priorities for their area of expertise.

Prioritisation between the two is often a bit more collaborative and as a senior engineer you have to justify why thing X is super critical (not just propose that thing X needs to be done).

I view the goal of managers + lead as balancing the various things the team could be doing (especially externally), and the goal of a senior engineer as being an input to that process for the specific system they know most about.


I agree, but I think that input is limited to unopinionated information about the technical impact or user-facing impact of each task.

I don't think it can be said that senior engineers persuade their leaders to take one position or the other, because you can't really argue against a political or financial decision using technical or altruistic arguments, especially when you have no access to the political or financial context in which these decisions are made. In those conversations, "we need to do this for the good of the business" is an unbeatable move.


I guess this is also a matter of organisational policy and how much power individual teams/organisational units have.

I would imagine mature organisations without a serious short/medium-term existential risk tied to product features may build some push-back mechanisms to defend against the inherent cost of maintaining the existing business (ie prioritising tech debt to avoid outages etc).

In general, it is probably a mix of the two - even if there is a mandate from up high, things are typically arranged so that it can only occupy X% of a team’s capacity in normal operation etc, with at least some amount “protected” for things the team thinks are important. Of course, this is not the case everywhere and a specific demand might require “all hands on deck”, but to me that seems like a short-sighted decision without an extremely good reason.


In my 30 years in industry -- "we need to do this for the good of the business" has come up maybe a dozen times, tops. Things are generally much more open to debate with different perspectives, including things like feasibility. Once in a blue moon you'll get "GDPR is here... this MUST be done". But for 99% of the work there's a reasonable argument for a range of work to get prioritized.

When working as a senior engineer, I've never been given enough business context to confidently say, for example, "this stakeholder isn't important enough to justify such a tight deadline". Doesn't that leave the business side of things as a mysterious black box? You can't do much more than report "meeting that deadline would create ruinous amounts of technical debt", and then pray that your leader has kept some alternatives open.

It’s possible, but I think it’s typically used for ingress (ie same IP, but multiple destinations, follow BGP to closest one).

I don’t think I’ve seen a similar case for anycast egress. Naively, it doesn’t seem like it would work well, because a lot of the internet (eg non-anycast geographic load balancing) relies on unique sources, and Cloudflare definitely break out their other anycast addresses (eg they don’t send outbound DNS requests from 1.1.1.1).


Cloudflare actually does anycast for egress too, if that is what you meant: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...


So reading the article you’re right, it’s technically anycast. But only at the /24 level to work around BGP limitations. An individual /32 has a specific datacenter (so basically unicast). In a hypothetical world where BGP could route /32s it wouldn’t be anycast.

I wasn’t precise, but what I meant was more akin to a single IP shared by multiple datacenters in different regions (from a BGP perspective), which I don’t think Cloudflare has. This is the general parallel of ingress anycast as well: a single IP that can be routed to multiple destinations (even if, at the BGP level, the entire aggregate is anycast).

It would also not explain the OP, because they are seeing the same source IP from many (presumably) different source locations, whereas with the Cloudflare scheme each location would have a different source IP.


To my knowledge, anycast is very much a thing Cloudflare uses. It allows them to split traffic per region, which, in the case of DDoS, is a good thing.


To be clear, they definitely use ingress anycast (ie anycast on external traffic coming into Cloudflare). The main question was whether they (meaningfully) used egress anycast (multiple Cloudflare servers in different regions using the same IP to make requests out to the internet).

Since you mentioned DDOS, I’m assuming you are talking about ingress anycast?


It doesn't really matter if they're doing that for this purpose, though. Cloudflare (or any other AS) has no fine control of where your packets to their anycast IPs will actually go. A given server's response packets will only go to one of their PoPs. It's just that which one will depend on server location and network configuration (and could change at any time). Even if multiple of their PoPs tried to fetch forward from the same server, all but one would be unable to maintain a TCP connection without tunneling shenanigans.

Tunneling shenanigans are fine for ACKs, but it's inefficient and therefore pretty unlikely that they are doing this for ingress object traffic.


For POSIX: I leave Bash as the system shell and then shim into Fish only for interactive terminals. This works surprisingly well, and any POSIX env initialisation will be inherited. I very rarely need to do something complicated enough in the REPL of the terminal and can start a subshell if needed.

Fish is nicer to script in by far, and you can keep Fish scripts isolated with shebang lines while still running Bash scripts (with their own proper shebang line). The only thing that’s tricky is `source` and equivalents, but I don’t think I’ve ever needed that in my main shell rather than a throw-away sub shell.


I often write multi-line commands in my zsh shell, like while-loops. The nice thing is that I can readily put them in a script if needed.

I guess that somewhat breaks with fish: either you use bash -c '...' from the start, or you adopt the fish syntax, which means you need to convert again when you switch to a (bash) script.


I guess my workflow for this is more fragmented. Either I’m prototyping a script (and edit and test it directly) or I just need a throwaway loop (in which case fish is nicer).

I also don’t trust myself to not screw up anything more complex than running a command on Bash, without the guard rails of something like shellcheck!


I used to do it this way, but then having to mentally switch from the one to the other became too much of a hassle. Since I realized I only had basic needs, zsh with incremental history search and the like was good enough.

I don't care for mile-long prompts displaying everything under the sun, so zsh is plenty fast.


What do you mean by “fixing this” or it being a design flaw?

I agree with the point about sequential allocation, but that can also be solved by something like a linter. How do you achieve compatibility with old clients without allowing something similar to reserved field numbers to deal with version skew ambiguity?

I view an enum more as an abstraction to create subtypes, especially named ones. “Enumerability” is not necessarily required and in some cases is detrimental (if you design software in the way proto wants you to). Whether an enum is “open” or “closed” is a similar decision to something like required vs optional fields enforced by the proto itself (“hard” required being something that was later deprecated).

One option would be to have enums be “closed” and call it a day - but then that means you can never add new values to a public enum without breaking all downstream software. Sometimes this may be justified, but other times it’s not something that is strictly required (basically it comes down to whether an API of static enumerability for the enum is required or not).

IMO the Go way is the most flexible and sane default. Putting aside dedicated keywords etc, the “open by default” design means you can add enum values when necessary. You can still do dynamic closed enums with extra code; static ones are still not possible without codegen. However, if the default was closed enums, you wouldn’t be able to use it when you wanted an open one, and would have to set it up the way it works now anyway.
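
To make that concrete, here’s a rough Go sketch of what I mean (the type and method names are made up for illustration): the enum type itself stays open, and “closing” it is just an explicit, dynamic check that callers opt into with a bit of extra code.

    package main

    import "fmt"

    // Colour is an "open" enum in the usual Go style: it is just a named
    // integer type, so any int can be converted into it, including values
    // added by a newer writer that an older reader has never seen.
    // (Colour/IsKnown are made-up names for this sketch.)
    type Colour int

    const (
        ColourUnspecified Colour = iota
        ColourRed
        ColourGreen
        // New values can be appended here without breaking existing callers.
    )

    // IsKnown is the "extra code" for a dynamically closed enum: callers
    // that genuinely need exhaustiveness can reject values they don't
    // recognise instead of silently passing them through.
    func (c Colour) IsKnown() bool {
        return c >= ColourUnspecified && c <= ColourGreen
    }

    func main() {
        wire := Colour(7) // e.g. decoded from data produced by a newer binary
        if !wire.IsKnown() {
            fmt.Println("unknown enum value, handling explicitly:", wire)
        }
    }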


Not sure what GP had in mind, but I have a few reasons:

Cherry picks are useful for fixing a release or adding changes without having to cut an entirely new one. This is especially true for large monorepos, which may have all sorts of unrelated changes in between. They are a much safer way to “patch” a release, especially if the release process itself is long and you want a limited-scope “emergency” one.

Atomic changes - assuming this is related to releases as well, it’s because the release processes for the various systems might not be in sync. If a frontend change that uses a new backend feature has to be released alongside the backend feature itself, you can get version-skew issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better not to make these changes “atomic” in the first place.


Google’s SRE STPA starts with a similar model. I haven’t read the external document, but my team went through this process internally and we considered the hazardous states and environmental triggers.

https://sre.google/stpa/teaching

Disclaimer: currently employed by Google, this message is not sponsored.


I’m struggling to understand the chain of events, because the story starts midway. Is the claim that JUST the 2FA code was enough to pwn everything with no other vulnerabilities? If that’s the case, then that’s a way bigger problem.

Or (given the password database link at the end), is the sequence:

1) various logins are pwned (Google leak or just other logins, but using gmail as the email - if just other things, then password reuse?)

2) attacker has access to password

3) attacker phishes 2FA code for Google

4) attacker gains access to Google account

5) attacker gains access to Google authenticator 2FA codes

6) attacker gains access to stored passwords? (Maybe)

7) attacker gains the 2nd factor (and possibly the first one, via the Chrome password manager?) to a bunch of different accounts. Alternatively, more password reuse?

I guess the key question for me, was there password reuse and what was the extent, or did this not require that?

Disclaimer: work at Google, not related to security, opinions my own.


I think the attacker had my password, and they just needed a recovery method, which was the code I read over the phone.

I have no idea how they had my password, I never share passwords or use the same password. But I hadn’t changed my Google password in a while.


No, if they had had the password they wouldn't have needed to do all of that. They could have just logged in, perhaps just needed the 2FA code. However, you say that you gave them both enhanced security codes (I'm guessing this was a gmail backup key), and you also gave them the 2FA SMS code. These are the only two things you need to take over any gmail account, and it doesn't require knowing the password. It's just purely social engineering.

The only question mark is the email from google. It sounds like it was a scam email, so it would be interesting to know whether/how it was spoofed.


Gotcha, thanks for clarifying!

And did you have passwords using chrome password manager as well (which were also compromised by the Google account access, and this is how they got access to e.g. Coinbase?), or did they get passwords through some other means and just needed 2FA?


I did have saved passwords in Chrome password manager but they were old. My guess is that the attacker used Google SSO on Coinbase (e.g., "sign in with Google"), which I have used in the past. And then they opened up Google's Authenticator app, signed in as me, and got the auth code for Coinbase.

By enabling cloud-sync, Google has created a massive security vulnerability for the entire industry. A developer can't be certain that auth codes are a true 2nd factor if the account email is @gmail.com for a given user, because that user might be using Google's Authenticator app.


Hmm, I see what you mean, although technically this is still a two-factor compromise (Google account password + 2FA code). Just having one or the other wouldn’t have done anything. The bigger issue is the contagion: compromising one pair of loosely related factors (the ones protecting the email account, not the actual login) compromises everything else.

Specifically, the most problematic combination is SSO + Google Authenticator. Just @gmail + Authenticator is not enough; you also need to store passwords in the Google account and sync them.

Although, this is functionally the same as using a completely unrelated password manager and storing authenticator codes there (a fairly common feature) - a password manager compromise leads to a total compromise of everything.


You used Google SSO for Coinbase?


Did you reuse that password on another site?

I don’t see how this happens if you use strong passwords without reuse.


500+ comments in this thread and there's still no information as to what the hell actually happened.

I sleep fine at night; this is a hallmark of these "omg I got owned and it could happen to you!" posts that never quite add up.


Passwords don't matter if you have access to the inbox and 2FA codes; you can just reset passwords.


But if you get access to the inbox, then you have a compromised device or obtained the password via some other means, right?

Inbox access is a fairly big compromise, even without the 2FA codes.


Inbox is the biggest compromise of them all IMO. I realized this a decade ago and use a different email for every account that I have. None of them have anything to do with my name in any way; I use 4 random words to create a new email for any new account that I need. Accidental takeover of any one account does not lead to a total takeover of my life :)


You're right, seems they already had his inbox credentials.


No, it sounds like they got him to create backup codes, which (along with the SMS 2FA code, which he also gave them) is all they need to take over the Gmail account. Job done.


These don’t necessarily prevent censorship; at best they give you a way to detect it.

DNSSEC gives you the ability to verify the DNS response. It doesn’t protect against a straight up packet sniffer or ISP tampering, it just allows you to detect that it has happened.

DoT/DoH are better, they will guarantee you receive the response the resolver wanted you to. And this will prevent ISP-level blocks. But the government can just pressure public resolvers to enact the changes at the public resolver level (as they are now doing in certain European countries).

You can use your own recursive resolver, and this will actually circumvent most censorship (but not hijacking).

Hijacking is actually quite rare. ISPs are usually implementing the blocks at their resolver (or the government is mandating that public resolvers do). To actually block things more predictably, SNI-based blocking is already very prevalent and generally a better ROI (since you need a packet sniffer to do either).
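
As a rough illustration of the DoH side, here’s a minimal Go sketch that queries Cloudflare’s public JSON endpoint directly (assuming that endpoint; any resolver exposing the same JSON interface would do). The point is just that the query travels over HTTPS to a resolver you chose, not to whatever the network hands you on port 53.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // DoH "JSON API" lookup against a public resolver. The resolver URL
        // is the part you choose; your ISP only sees an HTTPS connection to
        // it. Sketch only: error handling is deliberately minimal.
        req, err := http.NewRequest("GET",
            "https://cloudflare-dns.com/dns-query?name=example.com&type=A", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Accept", "application/dns-json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Decode just the answer section; the response also carries a status
        // code and flags (such as the AD bit when the resolver validated
        // DNSSEC).
        var out struct {
            Answer []struct {
                Name string `json:"name"`
                Data string `json:"data"`
            } `json:"Answer"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        for _, a := range out.Answer {
            fmt.Println(a.Name, "->", a.Data)
        }
    }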


DNSSEC itself won't help you alone, but the combination of DNSSEC + ODoH/DoT will. Without DNSSEC, your (O)DoH/DoT server can mess with the DNS results as much as your ISP could.

Of course you will need to configure your DNS server/client to do local validation for this, and at most it'll prevent you from falling for scams or other domain foolery.


In practice, DNSSEC won't do anything for ordinary Internet users, because it runs between recursive resolvers and authority servers, and ordinary users run neither: they use stub resolvers (essentially, "gethostbyname") --- which is why you DHCP-configure a DNS server when you connect to a network. If you were running a recursive resolver, your DNS server would just be "127.0.0.1".

The parent comment is also correct that the best DNSSEC can do for you, in the case where you're not relying on an upstream DNS server for resolution (in which case your ISP can invisibly defeat DNSSEC) is to tell you that a name has been censored.

And, of course, only a tiny fraction of zones on the Internet are signed, and most of them are irrelevant; the signature rate in the Tranco Top 1000 (which includes most popular names in European areas where DNSSEC is enabled by default and security-theatrically keyed by registrars) is below 10%.

DNS-over-HTTPS, on the other hand, does decisively solve this problem --- it allows you to delegate requests to an off-network resolver your ISP doesn't control, and, unlike with DNSSEC, the channel between you and that resolver is end-to-end secure. It also doesn't require anybody to sign their zone, and has never blown up and taken a huge popular site off the Internet for hours at a time, like DNSSEC has.

Whatever else DNSSEC is, it isn't really a solution for the censorship problem.


Obviously you need to enable local verification for DNSSEC to do anything in the first place, otherwise the DNS server can just lie about the DNSSEC status. If someone is manually configuring a DoH resolver, they probably have a toggle to do DNSSEC validation nearby.

DNSSEC doesn't prevent censorship, but it does make tampering obvious. Moving the point of trust from my ISP to Cloudflare doesn't solve any problems, Cloudflare still has to comply with national law. DoH is what you use to bypass censorship; DNSSEC is what you use to trust these random DNS servers you find on lists on Github somewhere.

A bit over half the websites I visit use signed zones. All banking and government websites I interact with use it. Foreign websites (especially American ones) don't, but because of the ongoing geopolitical bullshit, American websites are tough to trust even when nobody is meddling with my connection, so I'm not losing much there. That's n=1 and Americans will definitely not benefit because of poor adoption, but it only proves how many different kinds of "normal internet user" there are.


I think we're basically on the same page. With respect to who is or isn't signed, I threw this together so we could stop arguing about it in the abstract on HN:

https://dnssecmenot.fly.dev/


It does say that they collect this information in their “Data and Privacy Policy”. Specifically section 2.2 (Data Collected): https://quad9.net/privacy/policy/

Which policy are you referring to that implies they don’t?

Also I think you are assuming they store query logs and then aggregate this data later. It is much simpler just to maintain an integer counter for monitoring as the queries come in, and ingest that into a time series database (not sure if that’s what they actually do). Maybe it needs to be a bit fancier to handle the cardinality of the DNS-name dimension, but reconstructing this from logs would be much more expensive.
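
Purely as a hypothetical sketch of what I mean (not what Quad9 actually does): a per-RR-type counter like the one below in Go, incremented inline and periodically scraped into a time-series database, is far cheaper than reconstructing counts from query logs.

    package main

    import (
        "fmt"
        "sync"
    )

    // rrTypeCounts is the kind of aggregate I have in mind: one integer per
    // RR type, incremented inline as queries are served and then scraped or
    // pushed into a time-series database. No per-query log is ever written.
    // (Hypothetical sketch, not Quad9's actual implementation.)
    type rrTypeCounts struct {
        mu     sync.Mutex
        counts map[string]uint64
    }

    func (c *rrTypeCounts) Inc(rrType string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.counts[rrType]++
    }

    // Snapshot copies the current counters, e.g. for a monitoring scrape.
    func (c *rrTypeCounts) Snapshot() map[string]uint64 {
        c.mu.Lock()
        defer c.mu.Unlock()
        out := make(map[string]uint64, len(c.counts))
        for k, v := range c.counts {
            out[k] = v
        }
        return out
    }

    func main() {
        c := &rrTypeCounts{counts: make(map[string]uint64)}
        c.Inc("A")
        c.Inc("AAAA")
        c.Inc("A")
        fmt.Println(c.Snapshot()) // map[A:2 AAAA:1]
    }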


The section you mentioned does not say anything about having counters for labels. It only mentions that they record "[t]he times of the first and most recent instances of queries for each query label".


Well, the counters aren't data collected, they are data derived from the data they do collect. The privacy policy covers collection.

EDIT: I see they went out of their way to say "this is the complete list of everything we count" and they did not include counters by label, so I see your point!


I don't see how that is compatible with 2.2. They don't say anything about counters per label. It says counter per RR type, and watermarks of least and most recent timestamps by label, not count by label.

If an organization is going to be this specific about what they count, it implies that this is everything they count, not that there may also be other junk unmentioned.

