Hilarious that the entire TSA system is vulnerable to the most basic web programming error, one that you generally learn to avoid 10 minutes into reading about web programming, and that every decent-quality web framework automatically prevents.
It is really telling that they try to cover up and deny instead of fixing it, but not surprising. That is a natural consequence of authoritarian thinking, which is the entire premise and culture of the TSA. Any institution that covers up and ignores existential risks instead of confronting them head on will eventually implode from the consequences of its own negligence, which hopefully will happen to the TSA.
> Hilarious that the entire TSA system is vulnerable to the most basic web programming error that you generally learn to avoid 10 minutes
The article mentions that FlyCASS seems to be run by one person. This isn't a matter of technical chops, this is a matter of someone who is good at navigating bureaucracy convincing the powers that be that they should have a special hook into the system.
What should really be investigated is who on the government side approved and vetted the initial FlyCASS proposal and subsequent development? And why, as something with a special hook into airline security infrastructure, was it never security audited?
Based on the language on their site about requiring an existing CASS subscription, my guess is there was no approval at all. It appears this person has knowledge of the CASS/KCM systems and APIs, and built a web interface for them that uses the airline's credentials to access the central system. My speculation is that ARINC doesn't restrict access by network/IP, so they wouldn't directly know this tool even exists.
Some quick googling shows the FlyCASS author used to work for a small airline, so this may piggyback off of his prior experience working with these systems for that job. He just turned it into a separate product and started selling it.
The biggest failure here is with ARINC for not properly securing such a critical system for flight safety.
This right here is something people need to pay attention to, for the following reason:
One person can make a lot of impact
The most common thing I hear people say with respect to their jobs is: “I’m just one person, I can’t actually do anything to make things better/worse…”
But it’s just wrong, and there are thousands of examples of exactly that, over and over and over.
In this case, if this is true, it’s amazing for two reasons:
First, one person, or a small number of people, could build something into the critical path as a sidecar and have it work for a long time.
And second, the consequences of “hero” systems that are not architecturally sound prove that observability has to cover all possible couplings.
Oh, everyone knows that one single person can make things a lot worse. That's all that's happening here. That doesn't say anything about how much one single person can make things better. In the former case, your powers are amplified by the incompetence of everyone else involved; in the latter case, they are diminished.
Given the nature of these systems, this 1 person likely made the day to day lives of a lot of people better, providing an (arguably) snappier web interface to existing systems.
Granted, they've probably made someone's day a lot worse with this discovery, but..
They made the day of a lot of people by making the KCM program available to crewmembers of thousands of smaller airlines.
I take issue with the way that disclosure was implemented here. The responsible thing to do would be to contact the site first, whether it has 1 or 1000 employees.
Then you move forward with the FAA, DHS, etc. Assume that the site will act in good faith and recommend that they take down access until the problem is remedied, then back that up with disclosures and calls for auditing and verification to partner agencies.
Contacting the site first is the only honorable thing to do. It doesn’t mean you wait to contact other agencies, but contacting the site means the quickest halt to the vulnerability and least interruption to service. Disclosing to partner agencies is still required, of course, but hopefully they will be looking at a patched site and talking about how they can implement improvements in auditing the systems connected to the KCM service.
By disclosing in the right order you improve the possibility that organisations will focus on their appropriate role. The site fixes their egregious error and realises that their business depends on being secure, the TSA KCM manager realises that they need to vet access, and the FAA realises that the TSA needs to be supervised in the way that they interact with aircrew access.
Otherwise, everyone might just focus on the technical problem, which will be solved in a few hours or days and then go back to business as usual.
The vulnerability here actually is much, much larger than SQL injection. It is an inherent vulnerability in the organisational structure and oversight, and this will only be addressed in a bureaucracy if the actual problem is made clear at each organisational level and no red herring excuses that allow finger pointing are provided.
Not to mention it’s a dick move to leave the technical people out of the loop completely in the process of disclosure, even if the disclosure is primarily of a systemic organisational failure.
I’m sure the individual responsible was much more alarmed to get a call from DHS than they would have been to get a call from security researchers, so the given rationale is clearly fictional.
Assume people will act in good faith, but don’t give them room not to. Trust but verify. When dealing with companies and orgs this is the way. When dealing with randos on the internet, not so much.
When things go well nobody notices. I’ve certainly headed off and found/fixed a lot of bad decisions in my career, some of my own included. There was a lot of impact there, and it’s good when it’s invisible!
Good observation! This person is obviously meeting a need, and probably doing pretty well for themselves, SQL injection and all.
> The most common thing I hear people say with respect to their jobs is: “I’m just one person, I can’t actually do anything to make things better/worse…”
Yup. This is something on the order of a large-scale blackpill meme lately. Comment sections are usually rife with low-agency thinking. Which is quite something in tech, given that devs are the means of production for tech. True, tech as of late seems to be veering into more capital-heavy ventures (AI), probably to head off existential risk from the fact that a few skilled individuals can still really make a dent.
Real life is all of us and all of us have an enormous impact in some way. Especially if we try and apply ourselves. Not all the time, not for everything, but if we try enough things enough times and learn and grow, then people usually come out with impressive results of some sorts after a while.
People overestimate what can be done in the short term, and underestimate what can be done in the long term.
In a lottery the ratio is against you. In real life the ratio is almost guaranteed in your favor in some respect in the long term for anyone who tries.
Beware of black and white thinking here. There's no "winning," just small wins building momentum towards whatever change you want to effect. Luck is always a factor (and don't believe anyone who says otherwise), but don't discount your ability to work smarter and harder.
Why is it critical for flight safety? It is critical for security theatre we have to endure at airports because some people have heightened neuroticism.
Be that as it may, of course the error needs correction. If it really is a one-man show for a tool like this, it isn't even surprising that there are shortcuts.
Because your luggage is not checked at all. I'm sure that a state-level actor could circumvent the TSA, but an amateur could not, and amateurs pose a huge threat too; see the recent bombing attempt at the Taylor Swift concert or the Trump assassination attempt.
Allowing literally anyone to get into any airport and into any locked cockpit without any screening is critical to flight safety. If you can’t immediately see why I’m not sure what to tell you.
Something I’ve been thinking about, especially since that CrowdStrike debacle: why do major distributors of infrastructure (msft in the case of CrowdStrike, DHS/TSA here) not require that vendors with privileged software access have passed some sort of software distribution/security audit? If FlyCASS had been required to undergo basic security testing, this (specific) issue would not exist.
They often do. The value of those kinds of blanket security audits is questionable, however.
(This is one of the reasons I'm generally pro-OSS for digital infrastructure: security quickly becomes a compliance game at the scale of government, meaning that it's more about diligently completing checklists and demonstrating that diligence than about critically evaluating a component's security. OSS doesn't make software secure, but it does make it easier for the interested public to catch things before they become crises.)
Also, any certificate bears the certifying company's name. We can always say "company A was hacked despite having its security certified by company B", so company B at least shares some blame.
In practice, most commercial attestations/certifications contain enough weasel language that the certifier isn't responsible for anything missed (i.e. reasonable effort only).
But yes, there are many standards for this (e.g. SOC Type 2 reports).
In defense of their utility, the good ones tend to focus on (a) whether a control/policy for a sensitive operation exists at all in the product/company & (b) whether those controls implemented are effectively adhered to during an audited period.
That’s not really how they work. The auditor attests that they were provided with evidence that the systems/business units audited were compliant at the time of auditing. That doesn’t mean that the business didn’t intentionally fake the evidence, or that the business is compliant at any time subsequent to the assessment.
An auditor would certainly have some consequences if they were exposed for auditing negligently.
This is how the PCI SSC manages to claim that no compliant merchant/service provider has ever been breached, because they assume being breached means that the breached party was non-compliant at the time of the breach. Which is probably a technically true statement, but is a bit misleading about what they’re actually claiming that means.
“Worthless” is quite a strong claim. There isn’t much work I’ve encountered that’s truly “worthless”, even though bad work can make me quite upset. Anyway, that’s why I would often add caveats.
Mandatory audits by accredited auditors in order to participate in a market inevitably create a market for accredited auditors that don't uncover too much but ensure all checkboxes are ticked. Much of the security industry is actually selling CYA, not actual security. The same dynamic is at play when buying a home/boat/car: you should get your own inspector, not blindly trust the seller's.
I'll say they are worthless because most of the time they drag time away from things that could actually improve security. For example, at $LastJob we spent a ton of time on SOC2 compliance, and despite having applications with known vulnerabilities, we got hacked and ended up all over the news. Maybe instead of spending all that time getting SOC2 compliance finished, we could have worked on upgrading those apps.
Actually, I doubt they would have upgraded the apps; they'd have pocketed the profits instead. But SOC2 is providing cover instead of real change.
In this particular case it was worthless. If you have known vulnerabilities and you deprioritize that work to waste time on soc2, and get hacked because of it… soc2 was worthless. Because the whole point is security assurance. When you get hacked you’ve proved the opposite of security assurance.
But also you gotta have the balls to stand up to the guy pushing soc2 and say: no, there are known vulnerabilities, we are patching those first, then we are doing soc2. The way I frame it is: “we know we have critical vulnerabilities, we don’t need to go hunting for more till we fix them. Once we fix them, we go looking for other ways to improve security posture.”
And if the ceo still insists (big client requires it so we’re doing soc2 simultaneously) you say fine, then hire a security consultant so we can go twice as fast. And if he refuses you quit because fuck that place.
Because it's better than nothing when independent organizations are reviewing systems or other organizations. It's like saying that penetration tests are useless because you cannot prove security with testing.
Even if these govt. security audits are checkboxes, don't they require some nominal pentesting and black-box testing, which test for things like SQL injection?
It may not apply to this specific incident, but pen-testing only ensures you meet a minimum standard at a specific point in time.
I almost feel I could write novels on this and adjacent topics (if only I had time and could adequately structure my thoughts!), but the simple fact is that the SDLC in a lot of enterprises/organizations is fundamentally broken. Unfortunately, a huge portion of what breaks it tends to occur long before a developer even starts bashing out code.
In the case of msft/crowdstrike isn't this exactly the opposite of what HN rallies against? The users installed crowdstrike on their own machines. Why should microsoft be the arbiter of what a user can do to their own system?
They automatically occupy that position because, in practice, no user of a Microsoft system can audit the entire "supply chain" of that system, unlike one built from open-source components. Any "control" someone has over "their own" system is ultimately incomplete when there is a company that owns and controls the operating system itself and has the sole power to both fix and inspect it.
Money. Eventually the lobbyists would make it so cumbersome to get the certification that only the defense industry darlings would be able to do anything. Look at Boeing Starliner for an example of how they run a “budget”.
They do. But market forces have pushed the standards down. Once upon a time a "pen test team" was a bunch of security ninjas that showed up at your office and did magic things to point out security flaws you didn't know were even a thing. Now it is an online service done remotely by a machine running a script looking for known issues.
Unfortunately we're in kind of the worst of all possible worlds here too. Not only do we want to "automate" these kinds of tests, but governments have bought into the "security through obscurity" arguments of tech giants, so the degree to which these automations can even be meaningfully improved is gated in practice by whoever owns the tech itself approving of some auditor (whether automated or human) even looking at it. The author of this article takes a serious risk of retaliation by even looking into this.
Part of the reason why CrowdStrike has that access, and why MS wasn't allowed to shut them out with Vista, was a regulatory decision, one in which regulators argued that somebody needs to do the job of keeping Windows secure in a way that a biased Microsoft can't.
So, I guess you could have some sort of escrow third party that isn't Crowdstrike or MS to do this "audit"?
MS could have provided security hooks similar to BPF in Linux, and similar mechanisms with Apple, rather than having Crowdstrike run arbitrary buggy code at the highest privilege level.
They could have, however the timeline the regulators gave Microsoft to comply was incompatible with the amount of work required to build such system. With a legal deadline hanging over their heads Microsoft chose to hand over the keys to their existing tools.
^ This statement cannot be accepted without proof. It sounds outlandish and weird. Which regulator? Under what authority? Also, Microsoft doesn’t listen to ANYBODY.
I've seen this stated before, but I haven't been able to find reliable data on when regulators required Microsoft to provide the access that they provided, or whether there's been time to provide a more secure approach. Do you know?
Replied in another comment, but I’m aware of the regulation that made msft give access. To my knowledge though, there’s nothing in the regulation that stops them from saying “you have to pass xyz (reasonable) tests before we allow you to distribute kernel level software to millions of people”
Oh they usually do require some kind of proof of security certification. However the checkbox audits to get those certs and the kinds of solutions employed to allow them to check off the boxes are the real problem.
Sigh. The company is a different problem than the product. Sally in accounting who has PII on her desk is a totally different problem than the team that wrote insecure code 15 years ago.
Authentication should not need to be re-implemented by every single organization. We should have official auth servers so that FlyCASS doesn't need to worry about identity management and can instead just hand that off to id.texas.gov (or whatever state they operate from) the same way most single-use tool websites use Google's login.
Authentication and authorization, and especially on the web, is one of those things that has never been implemented well. I hate every single piece of software, every standard, every library, every approach I have come into contact with from this domain. I am so glad I have nothing to do with this field anymore. It makes me angry even thinking about it.
I agree with that sentiment, and I have tried to contribute in the past, but then again, you have to choose your battles. Making the kind of impact on auth that means I, or anyone else, will not have to deal with rubbish systems in the future is a big task.
It is one thing to write the needed software, it is a much bigger task to convince enough companies that they need a different approach to this problem.
However, what I can offer is that if someone has the backing to actually make a difference in this market, I'll volunteer 50 hours to act as a reviewer and test developer. But that is if your project is backed by someone I believe can make a difference.
This exists in some European countries, in Hungary for example you have an identity service (KAU) which authenticates you and operates as an SSO provider across a number of different government properties.
FWIW, as a regular user of login.gov, from the outside, it looks like a well-designed system. I am able to add strong forms of 2FA (e.g., security keys or biometric authenticators), it requires strong passwords, etc. It also has decent developer documentation, has a support process, and comes with a vulnerability disclosure form baked into the main website. However, I have not used their API, nor have I seen any of the code (although I wonder if a FOIA request would actually compel them to give it to you).
The first bullet point on the /partners page of login.gov (regarding who should use it) says:
> You are part of a federal agency or a state, local, or territory government
I'm talking about a more generic service that any random industry system or individual can use. The way many websites use Google's OAuth without really using Google's APIs. Things that just want someone else (Google) to handle asking for and authenticating a name/password.
Not 100% sure how I feel about random companies being able to definitively identify me. I’m sure we’re drifting in that direction anyway, but it feels like it would negatively impact privacy online.
It also is not necessarily your actual ID. As far as the individual website needs to know, it could just be a random string of numbers and letters. As long as it's the same string each time they ask the authentication authority to confirm you.
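A minimal sketch of what that looks like on the relying party's side. In OpenID Connect the provider returns exactly such an opaque string as the `sub` claim of a verified ID token; the issuer URL, `sub` value, and in-memory user store here are all made up for illustration:

```python
# After the identity provider verifies the login, the site receives
# an opaque, provider-assigned identifier (the "sub" claim in OIDC).
# It is stable across logins but reveals nothing about the person.

users = {}  # (issuer, sub) -> application-side profile

def get_or_create_user(issuer: str, sub: str) -> dict:
    """Key accounts on (issuer, sub); no name, email, or real ID needed."""
    key = (issuer, sub)
    if key not in users:
        users[key] = {"preferences": {}}
    return users[key]

# Same opaque string each login -> same account, and the site never
# learns who "248289761001" actually is.
profile = get_or_create_user("https://idp.example.gov", "248289761001")
profile["preferences"]["theme"] = "dark"
```

The point is that the authentication authority can vouch "this is the same principal as last time" without the site ever holding identifying data.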
Americans as a whole are so allergic to government doing anything that we can't even get a national ID system
nor a centralized database of gun sales or ownership.
The bogeyman of evil Big Government, privacy, and censorship gets invoked.
It's fine if the Free Market does it, so Google, Facebook, Amazon, Twitter, Microsoft, et al get a free pass.
Topic drift, but no tools should use google login. Doing that means handing over to google the authority to decide who can and can't use your tool. And we all know google support is nonexistent and unreachable, so once it fails it's forever.
If you market a tool, you'd really want to own the decision on who you can sell it to.
For a government organization though, I'd agree it makes sense to use a government-run login service. (government run, not outsourced so some for-profit third party!)
Trusting Google's OAuth not to vanish overnight is less stressful than managing your own username/password database.
And that's pretty much my point. 2FA? Password Resets? Account Activation? Updating Email Address? No thanks. I would rather not have to deal with any of that. I literally just need a unique identifier to associate with your data and preferences.
Sorry if I wasn't clear. It is not that google will remove the service overnight (although they are infamous for canceling things, but not that bad). The problem is google will lock out users randomly for no reason and no recourse.
If that user was using google login to access your service/tool, you lost that user and there is nothing you can do. You really don't want to gate the access to your product via an unreachable unresponsive third party like google.
Many well-established web frameworks have plugins or components to handle user management out of the box, with sane defaults. Nobody should have to roll them by themselves with each hobby project. You're probably using a similar plugin to integrate with Google anyway.
Ah, but there are third-party services that provide identity verification, such as id.me. And now that there are for-profit entities involved in a government service, you will never be able to convince the government to implement their own solution. It's telling that id.me is headquartered in McLean, Virginia; gotta be in the DC metro area so your lobbyists have easy access to Congress.
It's also not a government web site. It's a private company who, for some reason, my own government outsources identity verification to. Meanwhile, the authorization system the US government has built (login.gov) is deemed "insecure" by the IRS and Social Security for some inexplicable reason. (But it's fine for Trusted Traveler Programs.)
It's the company providing the service that the government could provide on its own, but that service is being provided by a private company through a lucrative contract agreement.
You're aware that there's a registry per country, no? And that each country can choose to set aside a subdomain for all government services?
Yes, it's unfair that the US gets naked .gov - but that doesn't preclude the rest of the world from doing the right thing, and it certainly doesn't excuse the US government doing the stupid thing.
> This isn't a matter of technical chops, this is a matter of someone who is good at navigating bureaucracy convincing the powers that be that they should have a special hook into the system.
I would love to know how one can get what I'd imagine is at least a six-figure contract with the government? How does this work?
I imagine the author of FlyCASS must be making a good amount of money off their product.
> The article mentions that FlyCASS seems to be run by one person.
I wonder if they just subcontract everything? One popular hack of the preferences given to veterans and minorities in government procurement is to have essentially one-person fronts that get maximum preference and which subcontract everything to a real company at a markup.
We know that backdoors can be intentional for use by 3-letter agencies. And there is plausible deniability of the bureaucracy when they can pass blame onto a single individual.
Or it's bureaucracy being bureaucracy. The TSA is a lot of security theater anyway.
The US (and almost every government) has reliable ways to covertly move a person that don't involve putting SQLi in their own codebases.
The classic way to covertly move a person is to give them a new passport to travel under, and have them move around like every other schlub on the planet. Competent intelligence services make sure that this isn't easy to detect by making the fake passport's identifier indistinguishable from real ones. Russia has prominently failed to do this several times[1][2].
Having done software development with other federal agencies: they probably outsourced maintenance of critical national-security mandates to Deloitte, which has a team with managers in India running everything, with a completely counterproductive culture of hubris solely to make the two managers look good, and anybody who questions it gets terminated in a week.
Authoritarians don't like being challenged like this, and it tends to enrage them. It's not unheard of for them to arrest/imprison well-meaning security researchers who rightfully point out their own failings.
That's a problem with authoritarian organisations/regimes in general. They value loyalty over competence and you end up with people being in positions they shouldn't be in.
I'm not suggesting this is what they have done here, but this is exactly what authoritarian governments do. Straight from the pneumatic into the furnace.
> Hilarious that the entire TSA system is vulnerable to the most basic web programming error
Because it's a scam and the system is a grift.
I'm a pilot and own a private aircraft. Landing at any airport, even my home airport which is restricted by TSA is legal without any special requirement or background check. In fact, I have heard horror stories where TSA wouldn't let a pilot retrieve their aircraft for some bullshit administrative reason or another, so they enlisted a friend with a helicopter to drop them into the secure area to fly it out. Perfectly legal. The fact that the system can be brought down with a SQL attack is the least of it.
It sure would be nice if someday we get to have some TSA-free airlines and TSA-free flights for people that don’t want to get sprayed by ionizing radiation before every flight but don’t fly often enough to warrant a yearly membership fee. It would be interesting to see what people choose if a choice is available.
We haven’t had a large commercial plane go down in over 10 years since 9/11. Everyone that comes to the USA has been fully screened, vetted, and background checked. We’re all very safe. Mayorkas at the DHS has made sure there aren’t any terrorists in our homeland, because the government only exists to protect us from danger and make our lives better.
I find it amusing (actually more tragic than amusing) that the same politicians who tell us all day that corporations can't be trusted because they are run by people with character flaws (greed, lying, laziness, etc.); will turn around and tell us that handing more power and influence over to a government agency is a good idea.
They make it sound like the job pool between the public and private sector is completely separate when many people move back and forth between the two.
Take away the accountability that often governs the private sector and that seems to be the recipe for situations like this.
What mythical private sector accountability are we talking about? A government agency didn’t build the software, it was a one man, private sector company. Maybe the moral is not outsourcing every last thing in existence?
Not always, but often the marketplace will punish you if you screw up royally as a private company or employee. It seems that nearly every government snafu results in a promotion.
In practice, these systems get stronger rather than imploding. Any failure becomes a justification for more power that they can use to "prevent this from ever happening again". A system that ran smoothly and never had issues wouldn't be able to grow like this (and might even shrink as people start to take it for granted).
True, but even though I’ve always been careful to escape SQL, I once made an oversight by writing a custom SQL filter and failing to escape it. The code reviews also missed it (we were so used to the framework solving it for us). Luckily a pen test found it, and it was only in production briefly.
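For anyone following along, the class of bug under discussion looks roughly like this. A hypothetical sketch (not FlyCASS's actual code) contrasting string concatenation with the parameterized query that frameworks do for you:

```python
import sqlite3

# Toy in-memory database standing in for a real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crew (username TEXT, authorized INTEGER)")
conn.execute("INSERT INTO crew VALUES ('alice', 1)")

def check_vulnerable(username: str) -> bool:
    # Vulnerable: user input is concatenated into the SQL text, so an
    # input like "' OR '1'='1" rewrites the query's logic entirely.
    query = "SELECT authorized FROM crew WHERE username = '" + username + "'"
    return conn.execute(query).fetchone() is not None

def check_safe(username: str) -> bool:
    # Safe: a parameterized query treats the input as data, never as SQL.
    query = "SELECT authorized FROM crew WHERE username = ?"
    return conn.execute(query, (username,)).fetchone() is not None

payload = "' OR '1'='1"
print(check_vulnerable(payload))  # True  -- injection "authorizes" anyone
print(check_safe(payload))        # False -- payload is just a literal string
```

A hand-rolled filter that escapes some characters but misses an edge case sits in between these two, which is why the usual advice is to use placeholders everywhere rather than escape by hand.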
It might have been an insanely old application that predates SQL injection being common knowledge (or required to be protected against) and has been forgotten about/poorly maintained.
There are oodles and oodles of apps like this powering our daily lives.
Looks to me like there's a reason this vulnerability exists ... for example, to help certain people have a simple way to avoid TSA searches and/or credential checks.