
What are you actually expecting an average Israeli who does not agree with this to do? This comment strikes me as wild considering the exact same thing is playing out in America right now, and a bunch of people are making up their minds about "Americans" and what they stand for.

The same has been true for Iran, only up until now (and probably still) we have always had a more nuanced discussion: it's the Iranian government, not the people of Iran.

Come on, the governments of many countries do not necessarily represent their people.


Israel is supposed to be a democratic state. If the average Israeli disagrees with this, they can speak up. The only voices we are hearing now are those who support its current activities. Those who oppose are fewer and quieter.


What evidence do you have to support that claim?

I'm also baffled by the suggestion that democracy truly represents a majority and the apparent belief that dissent is quickly processed and rectified by democracy. Which country do you think shows this is working well?


It might be true that I am in a bubble and I am only hearing voices supporting these atrocities.

Democracy need not represent the majority, but if it works against the majority without any repercussions then who is to blame? Will the leadership be held accountable?

This war was started because the government knew they could get away with it. Every citizen is complicit in every crime committed by their government. Don't the citizens enjoy the fruits of crime even after claiming to oppose the actions of their government?


What specifically do you want individual citizens to do? Are you yourself complicit in everything your government does? Do you even know what they do?

Israelis are protesting; for better or worse, this is what democracy looks like.

Another question to ask: does every Russian support the war in Ukraine? What can they do about it?


Yes, I am complicit in the crimes of my government. I am helpless to do much, but the crime must be acknowledged. We are a part of the system; there is no sense in burying our heads in the sand.

Only when a crime is acknowledged can we talk about punishment. Will Israeli people not profit from this war? Protests will only have teeth if steps are taken so this does not repeat itself. I don't see this happening.

Look at the USA, war after war. Presidents are blamed but not punished, and the population enjoys the economic hegemony that is the fruit of war.


People aren't speaking up because it's only a democracy on paper. If you're too vocal about your opposition to Israel you will be taken care of.


The problem with this take is that polls show strong support among its citizens for all those things the Israeli government is doing in Gaza. That is, the average Israeli does agree. I don't think that the minority that disagrees is to blame, but they also clearly cannot meaningfully speak for the nation anymore.

In a similar vein, I'm ethnically Russian and a Russian citizen. I don't support the Russian invasion of Ukraine in any way, shape, or form, and I don't think that I am responsible for it as a Russian. However, it is also clear to me that the majority of Russians do support it (or at least think that it's fine), and on that basis I don't consider myself to be a part of that nation anymore, regardless of ethnicity.


> What are you actually expecting an average Israeli who does not agree with this to do?

Funny you say this, because you don't have to look far to find people saying that "Gazans deserve what's happening" on the grounds that the average Gazan should fight back against Hamas.


Some things to consider:

* The majority of the Palestinian population are minors (< 18)

* The last nationwide election in Palestine was in 2006

In other words, the last time an election was held, the majority of Palestinians weren't yet born, let alone old enough to vote in it. So it's difficult to hold the Palestinian people en masse responsible for Hamas in the same way we'd hold Israelis responsible for their current government, who last held an election in 2022.


The same thing has been said generally about Muslims and Islamic terror organizations.

Well anyway, it is still crazy to me that somebody is making a decision about the entire population of a country based on the government's actions in 2025.


What is an SM scene?


The ninja party was not even great.


>The ninja party was not even great.

I believe you.

By the time I was cool enough to have my pick of parties, I had a massive panic attack during the crowd crush at the bar at some Rapid7 event and ended up pissing off the person who got me the ticket by leaving after 30 minutes to go buy a Manhattan at a side bar in the casino, rather than wait in line 45 minutes for a beer and then spend another 45 minutes trying to wiggle away, only to start wiggling back.


Apple recently published a paper that seems to disagree: it plainly states that it's just pattern matching, along with tests to support that claim.

https://machinelearning.apple.com/research/illusion-of-think...


Anthropic has done much more in depth research actually introspecting the circuits: https://transformer-circuits.pub/2025/attribution-graphs/bio...


I'm having a hard time taking Apple seriously when they don't even have a great LLM.

https://www.techrepublic.com/article/news-anthropic-ceo-ai-i... Anthropic CEO: “We Do Not Understand How Our Own AI Creations Work”. I'm going to side with Anthropic on this one.


I guess I prefer to look at empirical evidence over feelings and arbitrary statements. AI ceos are notoriously full of crap and make statements with perverse financial incentives.


> I have a hard time taking your claim about rotten eggs seriously when you're not even a chicken.


That's really not true. Context is one strategy to keep a model's output constrained, and tool calling allows dynamic updates to context. MCP is a convenience layer around tool calls and the systems they integrate with.
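
To make that concrete, here's a minimal sketch of the tool-calling loop that MCP wraps; the tool name, schema, and stubbed result are hypothetical, and the JSON shape only approximates what the various LLM APIs use:

    import json

    # Hypothetical tool definition, roughly the shape LLM tool-calling APIs expect
    TOOLS = [{
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

    def dispatch(tool_call: dict) -> str:
        # The model emits a structured call; the host executes it and
        # appends the result to the context for the next model turn.
        if tool_call["name"] == "get_weather":
            return json.dumps({"city": tool_call["input"]["city"], "temp_c": 21})  # stubbed
        raise ValueError(f"unknown tool: {tool_call['name']}")

    print(dispatch({"name": "get_weather", "input": {"city": "Berlin"}}))

MCP's contribution is mostly standardizing how the tool list and the dispatch step get exposed by external servers instead of being hardcoded in the host.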


I think you are drawing the wrong conclusion - users cannot be mindful of clicks; we should live in a world where the click is assumed and go from there.


> users cannot be mindful of clicks

Why not?



A master class on how to say exactly nothing.


This is the primary failure of data platforms, from my perspective. You need too many 3rd parties/partners to actually get anything done with your data, and the costs become unbearable.


The bummer about lots of supply chain work is that it does not address the attacks we see in the wild, like xz, where malicious code was added at the source and attested all the way through.

There are gains to be had through these approaches, like inventory, but nobody has a good approach to stopping malicious code entering the ecosystem through the front door, and attackers find this much easier than tampering with artifacts after the fact.


Actually, this is not quite true: in the xz hack, part of the malicious code was in generated files present only in the release tarball.

When I personally package stuff using Nix, I go out of my way to build everything from source as much as possible. E.g. if some repo contains checked-in generated files, I prefer to delete and regenerate them. It's nice that Nix makes adding extra build steps like this easy. I think most of the time the motivation for having generated files in repos (or release tarballs) is the limitations of various build systems.
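
As a sketch of what such an extra build step does (written here as a plain Python script rather than a Nix phase; the file list and generator command are assumptions that vary per project):

    import pathlib
    import subprocess

    # Hypothetical checked-in generated files to discard before building
    GENERATED = ["configure", "m4/build-to-host.m4"]

    for path in GENERATED:
        # Delete the shipped artifacts so the build can't silently consume them
        pathlib.Path(path).unlink(missing_ok=True)

    # Re-run the project's own generator so the build consumes only reviewed sources
    subprocess.run(["autoreconf", "-fi"], check=True)

In Nix terms this kind of cleanup typically lives in a postPatch hook, which is why adding it is cheap.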


The xz attack did hit Nix though. The problem is that no one is inspecting the source code, which is still true with Nix, because everyone writes auto-bump scripts for their projects.

If anyone was serious about this issue, we'd see way more focus on code signing and trust systems that are transferable: e.g. GitHub has no provision to let anyone sign specific lines of a diff or a file to say "I am staking my reputation that I inspected this with my own eyeballs".
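
A minimal sketch of what such a line-level attestation could look like; the statement format is entirely hypothetical, and it assumes the `cryptography` package for Ed25519 signing:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def review_attestation(key, repo, commit, path, hunk):
        # Bind the reviewer's key to one specific hunk of one file at one commit
        digest = hashlib.sha256(hunk.encode()).hexdigest()
        statement = f"{repo}@{commit}:{path}:{digest}"
        return statement, key.sign(statement.encode())

    key = Ed25519PrivateKey.generate()
    statement, sig = review_attestation(
        key, "example.com/some/repo", "0123abcd", "m4/build-to-host.m4",
        "+some reviewed change\n")
    # Verification raises InvalidSignature if the lines or metadata were tampered with
    key.public_key().verify(sig, statement.encode())

A forge could then display "N reviewers signed these exact lines" next to a diff, making the reputational stake explicit.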


Is it really staking one's reputation? Think about it: if everyone is doing it all the time, an overlooked something is quickly dismissed as a mistake that was bound to happen sooner or later. Person X reviews so much code and usually does such a great job, but now they overlooked that one thing. And they even admitted their mistake. Surely they are not bad.

I think it would quickly fade out. What are we going to do if some organization for professional code reviews signs off on the code, but after 5 years in the business they make one mistake? Are we no longer going to trust them from that day on?

I think besides signing code, there need to be multiple pairs of eyeballs looking at it independently. And even then nothing is really safe. People get lazy all the time. Someone else surely has already properly reviewed this code. Let's just sign it and move on! Management is breathing down our necks and we gotta hit those KPI improvements ... besides, I gotta pick up the kids a bit earlier today ...

Don't let perfect be the enemy of good. There is surely some benefit, but one can probably never be 100% sure, unless one goes into mathematical proofs and understands them oneself.


It's unlikely that multiple highly-regarded reviewers would all make the same mistake simultaneously (unless all their dev machines got compromised).

Ultimately it's about making the attacker's life difficult. You want to raise the cost of planting these vulnerabilities, so attackers can pull it off once every few decades, instead of once every few years.


Yeah, the more I read through actual package definitions in nixpkgs, the more questions I have about selling this as some security thing. nixpkgs is very convenient, I'll give it that. But a lot of packages have A LOT of unreviewed (by upstream) patches applied at Nix build time. This burned Debian once, so I expect it to burn nixpkgs someday too. It's inevitable.

I do think reproducible builds are important. They let people who DO review the source code trust upstream binaries, which is often convenient. I made this work at my last job: if you "bazel build //oci/whatever-image", you end up with a Docker manifest that has the same sha256 as what we pushed to Docker Hub. You can then read all the code and know that at least that's the code you're running in production. It's neat, but it's only one piece of the security puzzle.
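
The check itself is just digest comparison; a minimal sketch, with hypothetical local paths for the rebuilt and published manifests:

    import hashlib

    def oci_digest(manifest_bytes: bytes) -> str:
        # An OCI/Docker image is addressed by the sha256 of its manifest bytes, and
        # the manifest in turn pins the config and layer digests, so equal digests
        # mean equal image contents all the way down.
        return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

    # Hypothetical paths: locally rebuilt manifest vs. the one pulled from the registry
    local = open("bazel-bin/oci/whatever-image/manifest.json", "rb").read()
    published = open("pulled/manifest.json", "rb").read()
    assert oci_digest(local) == oci_digest(published), "build did not reproduce the published image"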


(Effectively) nobody will ever be serious about this issue unless it is somehow mandated for everyone. Anyone who was serious about it would take 3x as long to develop anything compared to their competitors, which is not a viable option.


Yeah ultimately it's a public goods problem.

I wonder if a "dominant assurance contract" could solve this: https://foresight.org/summary/dominant-assurance-contracts-a...


This is why distros with long release cycles are better. Usually more time for eyeballs to parse things.

Take Debian, for example: the commit never made it to stable.


> provision to let anyone sign specific lines of a diff

Good idea that should be implemented by git itself, for use by any software forge like github, gitlab, codeberg, etc.

https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work


While the code itself did get to nix, the exploit was not functional specifically due to how nix works. That doesn't mean that a more sophisticated attack couldn't succeed though. It was mostly luck that kept it from affecting NixOS.


Your preference for compiling your backdoors yourself does not really fix the problem of malicious code supply.

I have this vague idea to fingerprint the relevant AST down to all syscalls and store it in a lock file, to have a better chance of detection. But this isn't a true fix either.
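
A minimal sketch of the fingerprint idea, assuming Python sources; a real version would need to normalize much more and chase call targets down to the syscall layer, which this does not do:

    import ast
    import hashlib
    import json

    def ast_fingerprint(source: str) -> str:
        # Hash a normalized AST dump: comment and formatting changes don't alter it,
        # but any change to the actual code structure does.
        tree = ast.parse(source)
        dumped = ast.dump(tree, include_attributes=False)  # drop line/column info
        return hashlib.sha256(dumped.encode()).hexdigest()

    # Hypothetical lock file mapping module paths to fingerprints
    lock = {"somepkg/module.py": ast_fingerprint(open("somepkg/module.py").read())}
    print(json.dumps(lock, indent=2))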


Yes you are right, what I am proposing is not a solution by itself, it's just a way to be reasonably confident that _if you audit the code_, that's going to be the actual logic running on your computer.

(I don't get the value of your AST checksumming idea over just checksumming the source text, which is what almost all distro packages do. I think the number of changes that alter the source text but not the AST is negligible. If the code (and AST) is changed, you have to audit the diff no matter what.)

The more interesting question that does not have a single good answer is how to do the auditing. In almost all cases right now the only metric you have is "how much you trust upstream", in very few cases is actually reading through all the code and changes viable. I like to look at how upstream does their auditing of changes, e.g. how they do code review and how clean is their VCS history (so that _if_ you discover something fishy in the code, there is a clean audit trail of where that piece of code came from).


> it's just a way to be reasonably confident that _if you audit the code_

Why do we so often pretend this is easy in conversations about dependencies? It's as if security bugs in dependencies were calling out to us, like a huge hole in the floor in front of a house inspector. But it's not like that at all: most people could inspect 99.9% of CVEs, read the vulnerable code, and accept it. So did the reviewers in the open-source project, who know that codebase much better than someone who's adding a dependency because they want to do X faster. And they missed it, or the CVE wouldn't be there, but somehow a random dev looking at it for the first time will find it?

In fact, if using dependencies meant reading, understanding, and validating their code, the number of dependencies I'd use would go to zero. And I would be locked out of doing many things, because I'm too dumb to understand them, so I can't audit the code, which means I'm definitely too dumb to replicate the library myself.

Asking people to audit the code in hopes of finding a security bug is a big crapshoot. The industry needs better tools.


This makes perfect sense on a beefy, super-powered dev laptop with the disk space upgrade on an unsaturated symmetrical gig connection.

I'm only exaggerating a little bit here. Nix purism is for those who can afford the machines to utilize it. Doing the same on old hardware is so slow it's basically untenable.


This shows only a surface-level understanding of what Nix provides here.

One of the biggest benefits is the binary cache mechanism, which allows you to skip building something and instead pull the effective result of the build from the cache. It's classical distributions that make building from source possible only for those who can afford the infrastructure; Nix is what enables the rest of us to do so.


The Nix cache exists for a reason.


Glossing over some details, the build artifact and build definition are equivalent in Nix. If you know the build definition, you can pull the artifact from the cache and be assured that you have the same result.


>When I personally package stuff using Nix, I go out of my way to build everything from source as much as possible. E.g. if some repo contains checked-in generated files, I prefer to delete and regenerate them. It's nice that Nix makes adding extra build steps like this easy. I think most of the time the motivation for having generated files in repos (or release tarballs) is the limitations of various build systems.

You know what would be really sweet?

Imagine if every time a user opted to build from source themselves, a build report was generated by default and sent to a server alongside the resulting hashes etc. And a diff report gets printed to your console.

So not only are builds reproducible, they're continuously being reproduced and monitored around the world, in the background.

Even absent reproducibility, this could be a useful way to collect distribution data on various hashes, especially in combination with system config info, to make targeted attacks more difficult.
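
As a sketch of the reporting side (the endpoint and payload format are entirely hypothetical; nothing like this exists today):

    import hashlib
    import json
    import platform
    import urllib.request

    REPORT_URL = "https://rebuilds.example.org/report"  # hypothetical service

    def report_build(name: str, output_path: str) -> None:
        # Upload the hash of a locally built artifact so independent rebuilds
        # around the world can be compared against each other.
        digest = hashlib.sha256(open(output_path, "rb").read()).hexdigest()
        payload = {
            "package": name,
            "sha256": digest,
            "system": platform.machine(),  # coarse config info for spotting targeted attacks
        }
        req = urllib.request.Request(
            REPORT_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        # The server's response could carry the diff report to print to the console
        urllib.request.urlopen(req)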


I think a big part of the push is just being able to easily & conclusively answer "are we vulnerable or not" when a new attack is discovered. Exhaustive inventory already is huge.


I read somewhere that Go has a great tool for this (govulncheck, I believe) that statically checks usage of the specific vulnerable functions, not whole package deps.



ty ty exactly what I was thinking

Does something like this exist for other languages like Rust, Python, or JS?


I don't think the Rust ecosystem has that at this time. They're annotating the vulnerabilities with affected functions, but as far as I know nobody's written the static analysis side of it.

https://github.com/rustsec/rustsec/issues/21

Python and JS might be so dynamic that such static analysis just isn't as useful.


For Rust, the advisory database cargo-audit uses (https://github.com/RustSec/advisory-db/) does track which functions are affected by a CVE (if provided). I'm not sure if the tool uses them though.


I run a software supply chain company (fossa.com) -- agreed that there are a lot of low-hanging gains like inventory still around. There is a shocking amount of very basic but invisible surface area that leads to downstream attack vectors.

From a company's PoV -- I think you'd have to just assume all 3rd party code is popped and install some kind of control step given that assumption. I like the idea of reviewing all 3rd party code as if it's your own, which is now possible with some scalable code review tools.


Those projects seem to devolve into boil-the-ocean style efforts and tend to be viewed as intractable and thus ignorable.

In the days when everything was HTTP, I used to set a proxy variable and have the proxy save all downloaded assets to compare later; today I would probably blacklist the public CAs and do an intercept, just for the data of what is grabbing what.

FedRAMP was defunded and is moving forward with a GOA-style agile model. If you have the resources, I would highly encourage you to participate in the conversations.

The timelines are tight and they are trying to move fast, so look into their GitHub discussions and see if you can move it forward.

There is a chance to make real changes but they need feedback now.

https://github.com/FedRAMP


+1, I think you have to assume owned as well and start defending from there. Companies like Edera are betting on that, but sandboxing isn't a panacea; you really need some way to know expected behavior.


When you have so many dependencies that you need to create complex systems to manage and "secure" them, the problem is that you have too many dependencies: you are relying on too much volunteer work, and you are demanding too many features while paying too little.

The professional solution is to PAY for your operating system and rely on the vendor to secure it, whether that's Microsoft or Red Hat. You KNOW it's the right thing to do, and this is overintellectualizing your need to have a gratis operating system while charging non-gratis prices to your clients in turn.


How does that solve the problem? Both Microsoft and IBM/Red Hat have shipped backdoored code in the past and will no doubt do so again. At most you might be able to sue them for a refund of what you paid them, at which point you're no better off than if you'd used a free system from the start.


I couldn't disagree more with everything you've said.


But this is the solution the most cutting-edge LLM research has yielded; how do you explain that? Are they just willfully ignorant at OpenAI and Anthropic? If fine-tuning is the answer, why aren't the best doing it?


I'd guess the benefit is that it's quicker/easier to experiment with the prompt? Claude has prompt caching; I'm not sure how efficient that is, but they offer a discount on requests that make use of it. So it might be that it's efficient enough that the tradeoff is worth it for them?

Also I don't think much of this prompt is used in the API, and a bunch of it enables specific UI features like Artifacts. So if they reuse the same model for the API (I'm guessing they do, but I don't know), then I guess they're limited in terms of fine-tuning.


Prompt caching is functionally identical to snapshotting the model after it has processed the prompt. And you need the KV cache for inference in any case, so it doesn't even cost extra memory to keep it around if every single inference task is going to share the same prompt prefix.
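
A minimal sketch of that equivalence with Hugging Face transformers and a small model (gpt2 here just to keep it runnable; production serving stacks do the same thing at scale):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Run the long shared prefix (think: system prompt) exactly once...
    prefix_ids = tok("You are a helpful assistant.", return_tensors="pt").input_ids
    with torch.no_grad():
        cached = model(prefix_ids, use_cache=True).past_key_values

    # ...then each request reuses the cached keys/values instead of recomputing them
    new_ids = tok(" Hello!", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(new_ids, past_key_values=cached, use_cache=True)
    # out.logits continue from the cached prefix at the cost of only the new tokens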

