
Was fortunate to talk to a security lead who built the data-driven policing network for a major American city that was an early adopter. ALPR vendors like Flock either heavily augment or outright anchor these tech setups.

What was notable to me is the following, and it’s why I think a career spent either security-researching these vendors or going to law school and suing them into the ground over 20 years would be the ultimate act of civil service:

1. It’s not just Flock cams. It’s the data engineering into these networks - 18-wheeler feed cams, Flock cams, retail users’ Nest cams, traffic cams, ISP data sales

2. All in one hub, all searchable by your local PD and also the local PD across state lines who doesn’t like your abortion/marijuana/gun/whatever laws, and relying on:

3. The PD to set up and maintain proper RBAC in a nationwide surveillance network that is 100%, for sure, no doubt about it (wait how did that Texas cop track the abortion into Indiana/Illinois…?), configured for least privilege.

4. Or if the PD doesn’t want Flock in town, the cameras get reinstalled against the ruling (Illinois, iirc?), or the pitch becomes “we already have the feeds for the DoT cameras in/out of town and the truckers through town, so might as well have control over it, PD!”

Layer the above onto the current trend in the US, plus a 2025-model Nissan uploading stop-by-stop geolocation and telematics to the cloud (then sold into Flock? Does even knowing for sure whether it does or doesn’t matter?).

Very bad line of companies. Again, all of this is from primary sources who helped implement it over the years. If you spend enough time at cybersecurity conferences you’ll meet people with these jobs.


As someone who has thought about, planned, and implemented a lot of RBAC... I would never trust the security of a system with RBAC at that level.

And to elaborate on that -- for RBAC to have properly defined roles for the right people and ensure that there's no unauthorized access to anything someone shouldn't have access to, you need to know exactly which user has which access. And I mean all of them. Full stop. I don't think I'm being hyperbolic here. Everyone's needs are so different, and the risk associated with overprovisioning a role is too high.

When it's every LEO at the national level, that's way too many people -- it is pretty much impossible without dedicated people whose job it is to constantly audit that access. And I guarantee no institution or corporation would ever make a role for that position.

I'm not even going to lean into the trustworthiness and computer literacy of those users.

And that's just talking about auditing roles, never mind the constant bug fixes/additions/reductions to the implementation. It's a nightmare.

Funny enough, just this past week I was looking at how my company's roles are defined in admin for a thing I was working on. It's a complete mess and roles are definitely overprovisioned. The difference is it's a low-stakes admin app with only ~150 corporate employees who access it. But there were only like 8 roles!

Every time you add a different role, assign it to each different feature, and then give that role to a different user, it compounds.
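
A toy sketch of that compounding, with made-up role/feature names (purely illustrative, not any real product's schema):

    # Hypothetical toy model: roles grant features, users hold roles.
    # Every new role, feature, or user multiplies what an access review has to walk.
    from itertools import product

    roles    = ["dispatcher", "detective", "admin", "auditor", "vendor_support"]
    features = ["plate_search", "live_feed", "export", "cross_agency_query"]
    users    = [f"user_{i}" for i in range(150)]   # roughly the ~150-employee case above

    # Worst case: every role touches every feature and every user holds every role.
    role_grants      = list(product(roles, features))   # 5 * 4   = 20 grants
    user_assignments = list(product(users, roles))      # 150 * 5 = 750 assignments

    # What an audit actually has to review is the composition of the two:
    review_items = [(u, r, f) for (u, r) in user_assignments
                              for (r2, f) in role_grants if r2 == r]
    print(len(review_items))   # 150 * 5 * 4 = 3000 user/role/feature combinations

Swap 150 corporate employees for every LEO in the country and the review surface grows the same way, which is the point made above.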

I took your comment at face value but I hope to god that Flock at least has some sort of data/application partitioning that would make overprovisioning roles impossible. Was your Texas cop tracking an abortion a real example? Because that would be bad. So so bad.


>Was your Texas cop tracking an abortion a real example? Because that would be bad. So so bad.

https://www.eff.org/deeplinks/2025/05/she-got-abortion-so-te...


It always starts with "we just give developers in a project access to things in that project and it will all be nice and secure; we will also have a separate role for deploys so only Senior Competent People can do them."

Then the Senior Competent Person goes on vacation and some junior needs to run a deploy so they get the role.

Then the other project needs a dev from a different project to help them.

Then some random person needs something that has no role for it, so they "temporarily" get some role unrelated to their job.

Then the project changes managers, but the old one is still there for the transition.

And nobody ever makes a ticket to rescind that access

And everything is a mess


...and "the fix" that companies usually resort to is "use it or lose it" policies (e.g. you lose your role/permission after 30 days of non-use). So if you only do deployments for any given thing like twice a year, you end up having to submit a permissions request every single time.

No big deal, right? Until something breaks in production and now you have to wait for multiple approvals before you can even begin to troubleshoot. "I guess it'll have to stay down until tomorrow."

The way systems like this usually get implemented is there's an approval chain: First, your boss must approve the request and then the owner of the resource. Except that's only the most basic case. For production systems, you'll often have a much more complicated approval chain where your boss is just one of many individuals that need to approve such requests.

The end result is a (compounding) inefficiency that slows down everything.

Then there's AI: Management wants to automate as much as possible—which is a fine thing and entirely doable!—except you have this system where making changes requires approvals at many steps. So you actually can't "automate all the things" because the policy prevents it.


To add to that, the roles also need to be identified.

When some obscure thing breaks, you either need to go on a quest to understand which roles are involved in fixing it, or send a much vaguer "let me do X and Y" request to the approval chain and have them figure it out on their end.

And as the approval agents aren't the ones fixing the issue, it's a back-and-forth of "can you do X?" "no, I'm locked at Y" "ok, then how about now?"

Overprovisioning at least some key people is inevitable.


This is the part that doesn’t get enough attention. The real risk isn’t any single vendor, it’s the aggregation layer. Once ALPR, retail cams, traffic cams, ISP data, and vehicle telematics all land in one searchable system, the idea that this will be perfectly RBAC’d and jurisdictionally contained is fantasy. At that point it’s not policing tech, it’s a nationwide surveillance substrate held together by policy promises.


I’ve been in security for a while and I increasingly think understanding what the future looks like under this threat model is about the only security research that really matters, standing above the rest (many other topics are also very important in their own ways).

The state change is just so significant and so under-discussed, because you only learn about it by making an effort in a cybersec career, hitting conferences every year, eventually lucking out with who you meet for a beer, and so on.

So how do policy leaders trying to understand this stand a chance? How do local PD chiefs, who I really do believe deserve the benefit of the doubt wrt positive intentions, understand what they’re bringing in?

There is really no counter-voice to an incredibly capable nationwide surveillance network that’s been around for at least 10-15 years. The EFF doesn’t really count, because the EFF complains about these things, Sen. Wyden writes a memo, and that seems to be the accepted scope of the work.

Just like, man… the Bill of Rights… it’s a thing! Insane technology.


In other words it’s the telescreen from 1984.


The problem goes even deeper than messy RBAC in a database. This story showed that the system's brains are pushed to the edge, and if you gain access to the device, you don't even need the central police database. You get a local, highly intelligent agent working autonomously. This breaks the traditional threat model where we worry about "someone leaking the database"; here, the camera itself becomes an active reconnaissance tool. It turns out that instead of hacking a complex, (hopefully) secured cloud, you just need to find a smart eye like this with default settings, and you already have a personal spy at an intersection, bypassing any police access protocols.


Now you have scale, with AI hardware becoming cheaper and software incentives aligning.


I always thought that show "Person of Interest" was a bit far-fetched. How could one system have access to that much data? Privacy concerns would surely stop it.


You'd think so, but every time a crime is solved by Flock or the like, people keep celebrating it and using it as a justification.

It reminds me of this meme: https://www.reddit.com/r/Cyberpunk/comments/sa0eh3/dont_crea...

There are a few reasons people probably keep building in this space: 1. Eventually someone will do this anyway. 2. Thus, it shall be mine - I for sure will handle data better than anyone else can, respecting all sorts of guardrails, etc. 3. Company IPOs, founder leaves, things happen.


Along with all the cop shows I'm thinking it's almost intentional at this point to normalize things.


It’s the entire reason some shows and movies exist. The Pentagon, CIA and other agencies routinely and openly assist hundreds of films and TV shows with equipment, locations and expertise in exchange for script changes that protect U.S. military and intelligence reputations.


The very first cop show, Dragnet, was explicitly a PR move to rehab the image of the police in the public's imagination. Every cop show since has been propaganda. Even shows where the police are not necessarily the "good guys", like The Shield or even Chicago PD, normalize police brutality and the flouting of basic constitutional law because those dastardly bad guys have to be stopped at all costs.

I enjoy some of these shows myself but it is sometimes crazy how blatant they are about it.


The Wire was very good at showing the police as the villains, but it also instilled a lot of pessimism into the audience because said villains got away with damn near everything. Jimmy and Ellis probably sent more people to the hospital or the morgue than anyone else in the show (either directly or indirectly), but neither one got more than a few days of unpaid leave and a reassignment as a consequence. It also undercuts itself by having Ellis become probably the most respectable person in the cast and having all of the cast tell Jimmy he's not to blame for multiple shootings, destroying both families he's built, and even framing multiple innocent people with life sentences.

So even the ones that try to buck the trend end up following it.


It's definitely intentional. Notice how after the 1994 Crime Bill was put into effect you had a large wave of shows and movies that increasingly depicted police as tools of the state rather than as protectors of the public. The fact that police-centered media exploded in ever larger shockwaves after that, the Atlanta Centennial Olympic Park bombing, 9/11, and the deaths of Trayvon Martin and George Floyd was no coincidence. Law & Order, NYPD Blue, NCIS, Chicago PD, and Blue Bloods each correspond to one of those periods. The shows and movies are designed to make the abusive and destructive actions of the police look gallant. The police themselves actually consult on many of them in order to sensationalize depictions or manipulate points of view, which they can then take and use as emotional appeals when the public criticizes policing.

The name "Law & Order" is a blatant example of this, as it's a phrase used by Richard Nixon during his campaign in 1968, and was widely repeated when he created justifications for starting the War On Drugs in 1970. This same phrase was later used by Reagan and H.W. Bush when they planted their positions of wanting to wield state violence against countercultures that arose. The '90s was full of change as Gen-X started to become adults and formed their own powerful countercultures, and the title of the show was an emotional appeal to conservative older people who hated that change and wanted the state to shape society instead of the other way around.


Law and Order is interesting as the early episodes were way more nuanced and gritty. It evolved into something different over the years.

They went from exposition of “tv reality” to making a weird case that both cops and prosecutors must cut corners and push the envelope. The weird part is they gloss over the futility. But as you said, the old people get the message that we need to do more.


I will offer an alternative POV: if your big brilliant plan is to sue the elected institutions over administrative decisions, don’t go to law school. It would be a colossal waste of your time. You will lose, even if you “win.”

You are advocating that talented people go for Willits as a blueprint of “civil service,” which is a terrible idea. It’s the worst idea.

If you have a strong opinion about administrative decisions, get elected, or work for someone who wins elections.

Or make a better technology. Talented people should be working on Project Longfellow for everything. Not, and I can’t believe I have to say this, becoming lawyers.

And by the way, Flock is installed in cities run by Democrats and Republicans alike, which should inform you that this guy is indicting civil servants, not advocating for their elevation to some valued priesthood protecting civil rights.


https://www.opensecrets.org/federal-lobbying/clients/lobbyis...

Do you mean these fine former civil servants simply making administrative decisions who are now Flock lobbyists, or do you mean current civil servants who are future Flock lobbyists?

You're more likely the one getting paid something to not understand things if you, in 2025, believe the "bipartisan consensus" with massive donor-class overlap is credible to anyone without an emotional need to rationalize.


Understanding crypto from this type of international context, focused on these sorts of issues, is where it indisputably makes sense and is seeing real adoption. Low and slow, but at the end of the day, the solutions to a very large and growing problem are bitcoin+ adoption or a mass civics readjustment in the US. Which is more likely?

So it’s an inefficient tech with a mess of problems and uneven adoption, but if you want to send $1-$1mm anywhere on the globe, you can. That’s very powerful tech, and the implications are about as important as anything else from cryptography hitting public adoption. And all of those have been consequential - see the 30-year fight about e2ee.


If you work in cybersecurity, I’d table many views in this thread and just understand it’s the place to be to cut your teeth on fairly hard security problems and make money along the way. If 1980s security culture seemed cool - a new BoF every day and Bill Gates himself calling you a bad word for doing it - toss in advanced threat actors and a sec career in crypto isn’t too far off of that. Of course company-by-company variations apply, and the above could include explaining EDR to small teams with absurd amounts of funds tied to a private key in a .txt.

That said, much of the feedback in this thread applies to working in it imo, as the other side of keeping these companies and their treasuries un-hacked and capitalized is that it exposes you to a lot.

That said, I’ve done big tech too, and the nonsense in crypto just has a couple fewer rungs of management insulation than the rest of tech. The rest of tech lives with the consequences of asinine decisions over 4-5 quarters, and in crypto you live with them month to month. Pick your poison on your preferred version of nonsensical tech instability.

There’s a Twitter comment that covers what I’ve come to think - the natural state of crypto is just a more direct instantiation of what’s going on everywhere else; crypto just doesn’t hide it (sort of). Hard not to believe that with tech selling “trade in your IRA!” as if that’s not offering a beer to my 20-years-sober Uncle Bob, in terms of products that are cancerous for “the people.” So I see nothing in crypto that’s not reflected everywhere in tech and civics right now.

The crypto tech or integrations to pay attention to - BTC, cross-chain atomic swaps, trading firms, whatever finserv is testing for payment and settlement infra. All of these have deep building behind them, and are functional and funded. Wouldn’t bet against it over a career.


Universally adopted in part because of very well-known strong-arm business practices from Big Ag vs. farmers. This is a bad-faith framing imo. Source - I live in ag country.


Haha, very important disclaimer there, because your post reads a lot like it was written by a person who works for Big Ag.

The other reason these laws exist is a long history of Big Ag (Monsanto, Cargill) doing the following, which has been going on in the States for a while:

1) GMO/patented seeds in the field on the left, community non-Big Ag seeds in the field on the right.

2) Cross-pollination occurs because we’re talking crops. Variations on this.

3) Monsanto sues Farmer John and Jane into the ground next season for stealing tech via the crops they’re growing.

Add in a little bit of fear (encryption backdoors for the children, laws to prevent dangerous counterfeit seeds!), and you have a monopoly on farming run by big corps.

Also, US corps have a long history of POC’ing underhanded approaches in Africa.

What could be going on here!?

Edit - Man, rereading, “forced to plant [dangerous] saved seeds” - guess it’s Big Ag + tech startups now pushing this. Maybe… those farmers just want to control their “IP” (saved seeds) so they don’t have to buy them from a cartel of seed providers? This is such a well-known problem in the States - is this marketing really working in Africa?

Final edit on the soapbox - the other reason this matters is genetic diversity. Crop blight is a thing. There is no way the natural “herd immunity” of a basket of seed variants in a community is outstripped in effectiveness by a growing monoculture of owned hybrid seeds that stay in front of the blights each season. Coffee rust already jumped the Atlantic from Africa to SA. Often feels like I’ve read this sci-fi novel already (there is a good one - The Windup Girl).


From what I've read, the articles about Monsanto suing innocent farmers are misleading.



> The usual Monsanto claim involves patent infringement by intentionally replanting patented seed

https://en.wikipedia.org/wiki/Monsanto_legal_cases

Edit - Looks like I can’t reply again, but to the response below: yes, many view this approach as effectively leading to enforcing what you state. Which is why it is so horribly underhanded to me, and seeing supporting narratives on Hacker News was striking.


Doesn't this mean that farmers will no longer be able to reuse their own seeds then, if a neighbor has GMO seeds?


No, it doesn't. From their "commitment" [1] which was affirmed by the courts as binding in a 2010s court case (Organic Seed Growers & Trade Ass'n v. Monsanto):

> We do not exercise our patent rights where trace amounts of our patented seeds or traits are present in a farmer’s fields as a result of inadvertent means.

[1] https://web.archive.org/web/20101023123618/http://www.monsan...


> where trace amounts of our patented seeds or traits are present in a farmer’s fields as a result of inadvertent means.

That sounds like a very hollow commitment to me. Who defines what "trace" is? Monsanto?

And what is the normal cross-pollination rate from doing nothing? 1%? 5%? It sounds like it just means we won't sue you the first year; we'll wait until the second year, then sue you.

The practice needs to be banned. It's Monsanto seeds that are spreading their genetics in the wind. If they don't want that, then make crops that can't. If they're unable to, then tough.

Saying nobody within pollination range can grow their own crops anymore once someone nearby purchases Monsanto seeds is absurd.

That's all aside from the fact that patenting things that reproduce is still somewhat of a weak concept to begin with.

Putting an absurd tech spin on it: if you made a robot/machine that could replicate itself, sure, patent it. If you made a robot that sent out radio waves and every machine within receiving distance could/would suddenly replicate, you can't sue those owners for "stealing your technology".


The proof is in the pudding. To my knowledge Monsanto has never sued anyone over inadvertent cross contamination regardless of the percentages. The cases where they have sued were farmers who explicitly went out and got Roundup resistant seeds to use with Roundup from unlicensed vendors or in violation of a license they themselves signed with Monsanto.

It has never made any sense for them to enforce it against cross contamination because farmers don't want the seeds if they're not already nuking everything with glyphosate. They either buy F1 seeds every year for the extra yield hybrid vigor gives them or they save seed that's somewhat optimized for their growing conditions.

> Saying nobody within pollination range can grow their own crops anymore once someone nearby purchases Monsanto seeds is absurd.

This is a fantasy you have concocted, not the reality.


Meaning it didn’t happen, or the farmers aren’t as innocent as the word innocent legally implies?

Comment could be considered misleading…


To use the example provided by the anti-Monsanto upthread poster as an example of Monsanto being underhanded:

https://en.wikipedia.org/wiki/Bowman_v._Monsanto_Co

1. Bowman buys Monsanto soybeans as seeds agreeing to not replant the soybean harvest.

2. Bowman sells the soybean harvest to a food wholesaler who sells to retailers who sells to consumers for consumption.

3. Bowman buys soybeans back from that same food wholesaler (who normally only sells for consumption) intending to replant those food soybeans (which is abnormal).

4. Bowman then tests the seeds he bought to identify which ones carried the Monsanto modifications (from his own prior harvest, or from neighbors who were also using Monsanto seeds under the same contract) - the ones he was not allowed to replant per the contract in 1.

5. Bowman then only replants the ones with the modifications and uses Roundup in those fields.

6. Bowman then repeatedly saves and replants seeds from that crop to amplify their quantity of modified crop and purchases more seeds from the food wholesaler.

It was about as premeditated and intentional a contract violation as you can get.


What's the example on the other end of the spectrum?


Hello, are you still there?


There have been court cases, but in most cases, they weren't simply "innocent farmer happened to grow IP-infringing crop simply due to being near a farm that used GMO crops and cross-pollinating by accident".


How does that suing pass muster in any court of law?


Does it need to? Unfortunately, a threat of a lawsuit by a large company is weapon enough to make people buckle.


Read "confessions of an economic hitman", you'll get the gist


More expensive lawyers.


Good question


Not a great post; I’d not follow it if you’re interested in leading teams long term.

A self-admitted, self-taught manager learns the good parts of servant leadership via self-learning (nice!) but figures that’s all there is, instead of asking “this is interesting, this seems to work but has gaps, what more is there to this?”

If the author did that, they’d discover a massive body of knowledge, including the specific problem they point out - you solve problems for your team, so how do they start to solve their own problems?

Servant leadership works if paired with the following, tuned to the capabilities and maturities of the specific employee:

- servant leadership: resource your team, umbrella your team, let the smart people you hired do smart things, or turn so-so employees into great ones by resourcing them to learn, getting them mentorship, and “the sun is stronger than the cold wind” sort of thinking.

- Left/right limits and target outcome: consistently inform your team that their duty, in exchange for all the above manager work that’s way past the least-effort bar, is to get comfortable solving problems within the bounds of what the solution does and does not need to look like. Force this issue always, and they start solving their own problems at growing speed, and you have a QA check as a manager by documenting those boundaries per project, etc.

- train your replacement: part of serving your team is realizing there’s probably another sociopath on it who wants to lead teams, wants raw power, and so on. Enable that! Teach them how to lead teams in the above fashion. They’ll realize it works. You’ll train someone who can take over the remaining problem solving. This won’t hurt your own job either.

Put it all together and you’ll get very loyal, productive teams of employees who’ll respect you outside of work in your industry, where it matters for networking purposes, and you can live with yourself after the laptop closes, as you know you’re treating your fellow man/woman the right way while surviving in crazy corporate environments.

In short, bad advice in that article. There’s a whole corpus to leadership beyond what the author figured out on the side and describes here, ha.

Edit - ironically the author then argues for something arguably similar to the above, but claims it’s something else of their own invention. Engineers should really grok that there are existing bodies of very useful knowledge from the social sciences for all the things that seem easily dismissible as gaps or weak points. It’d save them a lot of time.


Servant leadership works just fine in business (as in a competitive, non-church environment) as long as you’re aware of who you’re serving and who you’re working peer-to-peer with/against/whatever.

Another term for it, somewhat, is being a “player’s coach.”

End state is you will build loyal-as-heck teams with it, and if you want to take a very cynical business mindset, it produces, with the least pain and suffering, three very important outcomes - your team will produce output, they won’t hate you along the way, and your team will write you (well-earned) manager perf reviews. A manager who has a loyal-as-heck team up and down the stack builds unique odds of corporate survival.

All it takes is a little EQ.


Assuming a 101 security program past the quality bar, there are a number of reasons why this can still happen at companies.

Summarized as: security is about risk acceptance, not removal. There’s massive business pressure to risk-accept AI. Risk acceptance usually means some sort of supplemental control that’s not ideal but manages the risk. There are very few of these for AI tools, however - the vendors are small; the tools aren’t really service accounts, but IMO that’s probably the best way to monitor them; integrations are easy; and eng companies hate taking admin of any kind away from devs, but if devs keep it, random AI on endpoints becomes very likely.

I’m ignoring a lot of nuance, but a solid sec program getting blown open by LLM vendors is going to be common, let alone bad sec programs. Many sec teams, I think, are just waiting for the other shoe to drop for some evidentiary support while managing heavy pressure to go full-bore on AI integration until then.


You missed risk creation vs reward creation.

And then folks can gasp and faint like goats and pretend they didn’t know.

It reminds me of the time I met an IT manager who didn’t have an IT background. Outsourced hilarity ensued through salespeople who were also non-technical.


What am I missing? Risk acceptance is what you’re referring to - risk creation and reward creation.

Sec lead might have a pretty darn clear idea of an out of whack creation of risk v reward. CEO disagrees. Risk accept and move on.

When you’re technical and eventually realize there’s a business to survive behind the tech skills, this is the stuff you learn how to do.

People “will know” as you say because it’s all documented and professionally escalated.


Manipulating this for creative accounting seems to be the root of Michael Burry’s argument, although I’m not fluent enough in his figures to map it here. But, commenting that it’s interesting to see IBM argue a similar case (somewhat), or comments ITT hitting the same known facts, in light of Nvidia’s counterpoints to him.


Burry just did his first interview for many years https://youtu.be/nsE13fvjz18?t=265

with Michael Lewis, about 30 mins long. Highlights - he thinks we are near the top; his puts are for two years' time. If you go long, he suggests healthcare stocks. He's been long gold for some years, thinks bitcoin is dumb. Thinks this is dot-com bubble #2, except instead of pro investors it's mostly index funds this time. Most recent headlines about him have been bad reporting.


Why is that a problem only for drivers going above the legal speed limit?

A slow fleet of Waymos will impact your average 5-10-over driver the same as your 20-over driver, and that’ll collectively impact traffic.

The implicit assumption you and many others in tech share is that humans must adapt to the tech protocol, and not the other way around.

After 20 years of growing negative externalities from this general approach, which I see baked into your comment - are we seriously about to let this occur all over again with a new version of tech?

Fool me once, fool me twice… I think we’re at fool me 10 times and do it again in terms of civic trust of tech in its spaces.


As long as they don't sit in the passing lane, I don't see how a fleet of vehicles moving at a consistent speed and not driving erratically will have any more negative impact on traffic than a human driver. Like others have mentioned, it might actually improve traffic as you don't have people speeding up to get close to a person and then quickly slowing down, causing "phantom" traffic jams.

Also, if the Waymos are following the laws, and that causes problems... then maybe those laws should be changed? Especially if most drivers already don't follow the laws.

