Quality Software Costs Money – Heartbleed Was Free (acm.org)
94 points by dblohm7 on Aug 15, 2014 | 70 comments


Simply throwing money at FOSS will not fix any security bugs. The money will be soaked up by those who are good at soaking up free money and making it disappear, that is all.

If you want to spend money wisely on FOSS, you do that by hiring someone to implement some specific change to a specific program: something that can be estimated, scheduled and tracked to completion.

"Find and fix security bugs in software component X" is not really a specific development task; it's a crapshoot. You cannot put a concrete amount of time and dollar figure on it. A team could burn through months of salary and not come up with anything. On a weekly status report, you could say "All week long, I looked painstakingly through such such directories and didn't find any security issues" even though you spent maybe ten minutes on it, and the rest of the time on HN. Even if someone else finds issues in the same code, nobody can prove that you didn't spend that time; you just overlooked those things. (For this reason, it's better to involve tools: if you cannot easily lie that you applied a certain code verification tool and found nothing, because someone else can run the same tool.)

This is probably why Heartbleed was not found earlier. Even though the software is packaged and "supported" by all kinds of vendors (for instance, vendors of hardware who provide FOSS-based firmware and support it for their customers), they don't spend resources on auditing all the FOSS packages that they bundle, and they don't do that because they know it is a big money hole. They implement things customers ask for, and respond to outside reports of issues (from customers or elsewhere).


Maybe you just enjoy hyperbole, but while part of what you say is correct (finding security vulns in software is unavoidably a bit of a crapshoot), your conclusions are wrong.

Finding deep, serious vulns like this in software can currently only be done by human beings. Tools are better at being authoritative, but can only find vulns of a given type. For example, static analysis is a great fit for any vuln that boils down to a dataflow problem: user-controlled source -> ... -> dangerous sink. XSS, SQL injection, etc. fit this model. Fuzzers are great at finding bugs in parsers (and there are a surprising number of parsers in the world, 90% of which should never have been written). Instrumented dynamic analysis can do awesome work for memory issues. I explain all this to show that there are areas where tools are fantastic. But there are many areas for which tools cannot help at all, and Heartbleed was one of them.
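
As a rough, hypothetical sketch of the source-to-sink dataflow shape that static analyzers are built to flag (the function names and packet layout here are made up for illustration, not taken from any real codebase):

    #include <string.h>

    /* Hypothetical sketch of the dataflow shape static analyzers flag:
     * attacker-controlled data (the "source") flows, possibly through
     * several calls, into a dangerous operation (the "sink") without
     * being validated along the way. */

    #define BUF_SIZE 64

    /* Source: the length comes straight off the wire, attacker-controlled. */
    static size_t read_length_from_packet(const unsigned char *packet) {
        return (size_t)((packet[0] << 8) | packet[1]);
    }

    /* Sink: memcpy with a tainted length into a fixed-size destination. */
    static void copy_payload(unsigned char *dst, const unsigned char *payload,
                             size_t len) {
        memcpy(dst, payload, len);   /* analyzer: "len may exceed BUF_SIZE" */
    }

    void handle_packet(const unsigned char *packet) {
        unsigned char buf[BUF_SIZE];
        size_t len = read_length_from_packet(packet);  /* taint enters here  */
        copy_payload(buf, packet + 2, len);             /* taint reaches sink */
    }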

The best security tools available were (presumably) run across OpenSSL before, and (certainly) with increased scrutiny after, Heartbleed. None of them found it. Simple limitations in static analysis lead me to believe they would never have found it on their own (most static analysis tools stop at 5 levels of indirection). Some background:

1. http://blog.trailofbits.com/2014/04/27/using-static-analysis...
2. http://security.coverity.com/blog/2014/Apr/on-detecting-hear...
3. http://www.grammatech.com/blog/finding-heartbleed-with-codes...

If you have immature projects, sure, run tools against them and some bugs will shake out. But if you want to find the next Heartbleed, a tool won't do it, which is where your conclusion is mistaken.

The question then becomes how to cultivate and encourage more people to find vulns like this. Money seems like a good incentive for most, although Neel Mehta did it of his own volition. I don't know the answer to that question, but things like Google's Project Zero are exactly what I would try first.


What is my mistaken conclusion? Okay, so all known tools have been exhausted, so now you're down to people and their talent for finding bugs. What people should you pay? How do you know you're getting your money's worth out of those people? What if there really is nothing left to find: are you prepared to believe six months' worth of status reports which say "found nothing?"

My point wasn't that only tools should be used; I put that in as an aside (wrapped in glaring parentheses!). If I hadn't, someone would have pointed it out for me in a reply: "Hey you fool, of course you can track whether people are really bug hunting and being honest about their activity, if they are using tools whose results are reproducible."

Of course tools only find things that they are designed to find. My point was not at all that tools should be used because they will find the next Heartbleed, but rather that you have some hope of tracking the progress of a security team that is applying tools.

The topic of submission isn't about what is the best way to find security holes, but about spending money on it. My view is that spending money wisely requires some definition of a "return on investment" and tracking of concrete goals. This is hard to do with security research (once tools-based approaches have been exhausted).


Your acute mistaken conclusion:

> Simply throwing money at FOSS will not fix any security bugs.

I can't think of anything closer to "throwing money at FOSS" than something like the Internet Bug Bounty. Google/Facebook/etc. collected a pile of money and put it up for a bug bounty for software used by most of us on the internet: https://hackerone.com/ibb. Click through to the projects and look at all the bugs that have been rewarded. https://hackerone.com/internet and https://hackerone.com/sandbox are the coolest.

My interpretation of your general conclusion is: without quantification, spending money/effort on security is not useful. I disagree with that because it's the nature of the beast. It's useful to have people look through code, and some weeks there will not be a lot of findings. It's absolutely okay for a status report to read "I tried this, thought this might work, investigated the way X works to ensure it doesn't do Y - 0 total findings".

Which people to pay and how to know you are getting your money's worth are not unsolvable problems. For example, at the company I work with we hold yearly bake-offs, giving different security consultants the same code to see what bugs they find; we then use the best 2 or 3. That's an approximation, sure, but it solves your "what people to pay" problem.

How to know if you are getting your money's worth: this is harder and rubs against the essence of security/QA work. No one knows what lurks in randomCode.tar.gz. That is the whole point of the exercise. But apparently the world agrees it's useful to have corporate application security teams do some vetting of the code looking for vulns, more useful than nothing at least. More useful than tools? Well, that's a weird comparison, because you likely need security people (or engineers with a bit of security background at least) to run some tools. I think tools vs people is a different debate, but I would bet on people even at an equal cost point.

I agree quantification of security research is hard, I disagree that because we can't quantify something it is not useful.


There are tools that let you do bounds checking statically, with varying degrees of headache. That would have precisely addressed heartbleed.


I disagree, security can be estimated, scheduled and tracked to completion. It's not magic, and while new attack vectors are still being discovered, Heartbleed is a very traditional vulnerability that static analysis should have caught - if someone were actually running the analysis.

In fact, we don't know how long it actually took to find Heartbleed once Codenomicon and Google started looking. It might have been a weekend project for someone new to Codenomicon's internal array of tools, who was just looking for a way to get up to speed at a new job. It's also possible that it's the result of months of painstaking work by Neel Mehta, 6 of which were possibly spent with the status report of "All week long, I looked painstakingly through such and such directories and didn't find any security issues", though I'd bet he didn't spend it on HN.

For traditional feature development, a dishonest employee could take random code from the internet, rename functions and then check that into version control, and say they were working but instead they were surfing HN. (Stretching out the "Oh, there's some problem getting it to compile, I'm working on it." excuse for months is left as an exercise for the dishonest.)

"Find and fix security bugs in software component X" is about as specific as "add a spell checker in software component X" and can be treated just the same. You break down the task into smaller components until you can visualize the code needed and then you write it. "Audit software component X for security risk A" is something a competent security researcher can be tasked to do and get results for.

For instance, after a week of working on timing attacks against an OpenSSL key verification function, the developer assigned to it could say "Here are the numbers I got; the difference between how long it takes to validate this key vs. invalidate this key averages this much, so it's not vulnerable to key recovery because X, Y and Z. Additionally, here is the code harness I wrote to generate this data; I also modified the code so that it was vulnerable to a timing attack, and here's my exploit code, which is why I know that the shipping code is not vulnerable to this class of attacks."
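
A minimal sketch of what such a timing harness might look like. Everything here is hypothetical: verify_key() is a toy stand-in rather than a real OpenSSL function, and a real measurement would need far more care about noise, warm-up, and statistics.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the routine under test; deliberately
     * non-constant-time (bails out at the first bad byte) so the two
     * measurements below differ. A real harness would call into the
     * library being audited instead. */
    static int verify_key(const unsigned char *key, size_t len) {
        for (size_t i = 0; i < len; i++)
            if (key[i] != 0)
                return 0;
        return 1;
    }

    /* Time many verifications of one input and return the average in ns. */
    static double average_ns(const unsigned char *key, size_t len, int iters) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iters; i++)
            (void)verify_key(key, len);
        clock_gettime(CLOCK_MONOTONIC, &end);
        double total = (end.tv_sec - start.tv_sec) * 1e9
                     + (end.tv_nsec - start.tv_nsec);
        return total / iters;
    }

    int main(void) {
        unsigned char good[32] = {0};   /* verifies fully             */
        unsigned char bad[32]  = {1};   /* rejected at the first byte */
        double t_good = average_ns(good, sizeof good, 100000);
        double t_bad  = average_ns(bad,  sizeof bad,  100000);
        printf("valid: %.1f ns  invalid: %.1f ns  delta: %.1f ns\n",
               t_good, t_bad, t_good - t_bad);
        return 0;
    }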

Now if you think you can fake your way through six months of status reports with that much detail, if not more, and also fake up some code for test harnesses to back it up, all the while surfing HN instead of working, I'm all ears. But outside of vendors selling snake-oil magic security blankets, security research is a hard field to be in, where sometimes there's a lot of work for no payoff. The flip side of that is that, while Neel Mehta isn't due to become a household name, he is now quite famous in certain circles, and won't have any trouble finding a job if he leaves Google.


> "Find and fix security bugs in software component X" is about as specific as "add a spell checker in software component X" and can be treated just the same. You break down the task into smaller components until you can visualize the code needed and then you write it. "Audit software component X for security risk A" is something a competent security researcher can be tasked to do and get results for.

Thank you - indeed, that is all of Agile and project management in one paragraph. Very well put.


How about funding improvements in static analysis toolsets and competitive/bounty penetration testing?


I wouldn't be allowed to build a skyscraper out of fatigued and rusty scrap iron. There is no reasonable amount of money and effort that could make that work, and just wanting to try would mark me as unqualified to do construction.

The Heartbleed bug was a classic buffer overrun of the kind C has been causing for decades. When someone comes along saying "If you pay me, I'm going to write security software using unchecked pointer arithmetic", how do we as a community agree that the only response will be "That's a terrible idea and nobody will pay you for it"?
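
For reference, the bug boiled down to trusting a length field supplied by the peer. A simplified sketch of the vulnerable shape and the bounds check that fixes it (this is an illustration, not the actual OpenSSL code):

    #include <string.h>
    #include <stdlib.h>

    /* Simplified sketch of the Heartbleed pattern. The peer supplies a
     * payload plus a claimed payload length; the reply is built by
     * trusting that claimed length. */

    /* Vulnerable shape: copies claimed_len bytes even if the received
     * record only holds actual_len bytes, leaking adjacent heap memory. */
    unsigned char *reply_vulnerable(const unsigned char *payload,
                                    size_t actual_len, size_t claimed_len) {
        (void)actual_len;                   /* the bug: never consulted */
        unsigned char *reply = malloc(claimed_len);
        if (reply) memcpy(reply, payload, claimed_len);
        return reply;
    }

    /* Fixed shape: bounds-check the claimed length against what actually
     * arrived, which is roughly what the official patch does. */
    unsigned char *reply_fixed(const unsigned char *payload,
                               size_t actual_len, size_t claimed_len) {
        if (claimed_len > actual_len)
            return NULL;                    /* discard the malformed request */
        unsigned char *reply = malloc(claimed_len);
        if (reply) memcpy(reply, payload, claimed_len);
        return reply;
    }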


C has some of the best static analysis and debugging tools of any language, but all of that's worthless if you don't use them. Doubly so if you specifically handicap those tools and write your own obfuscated allocation scheme. Heartbleed was less an indictment against C and more an indictment against shitty code.


I don't see how one statement excludes the other. C is not the best tool to write secure software, but at the same time people have figured out how to use it securely despite its deficiencies. Heartbleed was a failure of both the tool and how it was used.


Heartbleed wasn't the fault of C... It would have been caught by Valgrind and other tools if OpenSSL didn't implement their own allocator. Seriously, have you ever used Valgrind? It's not a difficult tool to use.

What he's (erik) really saying is: if someone hires me to build a skyscraper, and I show up with fatigued and rusty scrap iron... when the skyscraper fails, I should blame iron, and we should all have a talk about how terrible iron is, and why no one should be using iron. And whenever someone points out that I used rusty scrap, I'll just say, well, titanium wouldn't have been rusty! Why aren't we all using titanium?


> When someone comes along saying "If you pay me, I'm going to write security software using unchecked pointer arithmetic" how do we as a community agree that the only response will be "That's a terrible idea and nobody will pay you for it"?

By having an alternative that is actually in practice better.

Confidence in maintainability is important. Performance is important. Compatibility (both as in language bindings, and as in variety of host systems) is important.


C security software is being massively funded and is very successfully the basis for trillions of dollars of business, e.g. the Linux, Windows and Mac kernels. But, according to the #1-voted random pseudonymous internet commenter, this is obviously a terrible idea.


All of which have had massive security flaws, including hundreds or even thousands of zero-day remote exploits. And that's just counting the C memory corruption bugs, not actual algorithmic bugs.

If C requires me to only code in certain styles and only use certain tools to only get some increase in safety... Why not just use a language that builds safety in from the beginning?

That said, we're just going to continue soldiering on because of horrible programmer attitudes like yours. Everyone wants to believe that they are a unique little snowflake who wouldn't make those kinds of mistakes (oh wait, Coverity couldn't catch Heartbleed, oops) and choose the comfortable fiction of a Just World where people get what they deserve. After all, if people just drove better we wouldn't have so many accidents, so why bother with air bags and seat belts? Better hope everyone else drives better too, or you'll be changing all your passwords and keys with the rest of us schlubs no matter how fancy your valgrind test suite is.


> oh wait, Coverity couldn't catch Heartbleed, oops

Not fair. Iirc the reason Coverity couldn't catch Heartbleed was because the OpenSSL people insisted on rolling their own inferior memory allocation. It would have worked if they just used the system malloc(). Coverity only works for normal programmers, it can't fully mitigate the ingenuity of complete fools.
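
A rough sketch of why a wrapper allocator like that blinds malloc-aware tools: freed blocks are parked on a private freelist and recycled rather than returned to free(), so out-of-bounds reads and stale reuse land in memory the tool still considers live. (The code below is illustrative only; it is not OpenSSL's actual allocator.)

    #include <stdlib.h>

    #define POOL_BLOCK_SIZE 1024   /* fixed-size blocks for simplicity */

    struct block { struct block *next; };
    static struct block *freelist = NULL;

    /* Hand out a block, preferring the private freelist over malloc(). */
    void *pool_alloc(void) {
        if (freelist) {
            void *p = freelist;
            freelist = freelist->next;
            return p;               /* recycled: tools still see it as live */
        }
        return malloc(POOL_BLOCK_SIZE);
    }

    /* "Free" a block by pushing it onto the freelist; free() is never
     * called, so use-after-free and over-reads into recycled blocks look
     * like ordinary accesses to live memory as far as Valgrind or ASan
     * are concerned. */
    void pool_free(void *p) {
        struct block *b = p;
        b->next = freelist;
        freelist = b;
    }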

Of course, my recollection could be wrong. In which case "someone is wrong on the Internet" will apply and someone will quickly set the record straight.


"If C requires me to only code in certain styles and only use certain tools to only get some increase in safety... Why not just use a language that builds safety in from the beginning?"

What language actually meets the need? GC'd languages haven't proven themselves useful for things like OSs, databases, or libraries that are deployed extremely widely in many environments.

So what safe, non-GC language are you advocating?


There are several available (and some in the making).

That they aren't more established for this kind of work is both a historical accident (bad moves with regard to licensing, etc.) and a result of the cavalier, worse-is-better approach the industry takes to everything, even when it's counter-productive (e.g. with regard to security), which didn't let them catch on.

For some known ones, used at one point or another for those purposes: Ada, Pascal, Oberon, Modula, etc.

New contenders still in development include Rust, which is GC-optional. One wonders why it took until 2010+ to see another attempt at this kind of language, as if we didn't have any issues with C/C++.


This is irrelevant (and false).

GC is orthogonal to memory safety.

(And all of those things have successfully been done in GC'd languages.)


Not sure what I said was false?

I agree that GC and memory safety are orthogonal -- see Rust. But there aren't a lot of options if you want safe and non-GC, which was my point.

And GC is just not an option sometimes. GC has been around forever but it just hasn't proven itself in a lot of domains. I don't think it's for lack of trying, I think it's because sometimes you want to manage the memory.


> GC'd languages haven't proven themselves useful for things like OSs,

What about Lisp Machines?


Lisp Machines provide ways to 'manually' manage memory for things like network stacks.

If a Lisp Machine is doing a global full GC (which it wants to avoid as much as possible), then it does not react in any useful way to inputs, the network, ...


It can be both successful and a terrible idea. Another example would be the single-user automobile becoming the dominant form of transit in the US.


It can be whatever you want to call it. Terrible in comparison to a fantasy land of candy and rainbows. OP is saying "as a community we should say no to funding C security software". It's about as insightful as saying anyone who uses a car is dumb.


Wow, that wasn't what I was saying at all. Read my post again - using (mostly) single-occupancy internal-combustion automobiles as the primary form of transit for the world's population is a terrible idea. When the car was first invented, I don't think anyone was thinking about 40,000 road deaths a year and CO2 pollution causing global climate change. When the UNIX and Linux kernels were written in C, nobody was thinking about string formatting or memory allocation exploits being used to steal millions of credit card numbers. That doesn't mean that drivers are dumb, or kernel developers. It does mean that we're living with the terrible consequences of various first-to-market success stories.


I'm not saying writing secure software in C is inherently impossible, but I'm not aware of it having ever been accomplished, and by now yet another attempt shows such poor judgment that funding ought not be provided. Certainly Linux has had remote root exploits, and so many local exploits that its multi-user model is no longer taken very seriously, and the other kernels have fared no better.


> I'm not saying writing secure software in C is inherently impossible, but I'm not aware of it having ever been accomplished

https://sel4.systems

Possibly one of the most trustworthy pieces of software there is.


The sneaky thing they did is write seL4 in Literate Haskell, and then translate it to C. That involves some serious understanding of both languages!

http://ssrg.nicta.com.au/projects/seL4/tech.pml


Man, I'd forgotten about that project. I wonder how much of the code could have been written differently and how much was effectively dictated by the theorem prover. Those are probably the very first people in the industry who deserve to be called engineers.


So what did they do? Write the code in C and then proved it with Isabelle/HOL? Extracted C code from Isabelle/HOL?


djb's software (qmail, NaCl, DNSCurve, others). It is rare, but it has been done.


Even djb software has your usual C-type security issues, though in less frequency than normal. qmail had an integer overflow that was theoretically exploitable, but couldn't be practically exploited in a normal configuration (djb admitted that was "pure luck"). And djbdns had an integer overflow that was exploitable (though in fairly unusual circumstances): http://article.gmane.org/gmane.network.djbdns/13864


Oh the irony. Those that you mention are also safe until proven otherwise.

I remember before Heartbleed was discovered everyone was raving about how secure OpenSSL is because it has been around for so long yadda yadda ... Heartbleed was discovered and everyone's suddenly acting "haha I knew it all along .. it was shit".


Everything is safe only until proven otherwise. Java will save you from (say) buffer overflows, but not SQL injections or permission errors (the most probable successful attack against Apache servers had always been writable executable directories, even though it is written in C and had its share of buffer overflows).
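
To make the distinction concrete, here is a minimal sketch using SQLite's C API (the "users" table and its columns are made up for illustration): memory safety is irrelevant to the first function's injection bug, and the fix is parameter binding, not a different language.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Injectable: a name like  x' OR '1'='1  rewrites the query itself. */
    int lookup_unsafe(sqlite3 *db, const char *name) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT id FROM users WHERE name = '%s';", name);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safe: the user input is passed as a bound parameter and is never
     * interpreted as SQL, regardless of the implementation language. */
    int lookup_safe(sqlite3 *db, const char *name) {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
                 "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("id = %d\n", sqlite3_column_int(stmt, 0));
        return sqlite3_finalize(stmt);
    }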

And if your circle thought OpenSSL was tight, you are hanging in the wrong circles for security. Heartbleed was exceptionally bad, but OpenSSL for a long time had a couple of security fixes a month on Ubuntu and Debian - that's enough to tell you that you need to follow advisories closely and not trust blindly. And yes, I have been saying that for at least 5 years, before Heartbleed was introduced.


Conversely, the well-audited OpenBSD kernel has had next to none of these problems precisely because they pay attention to correctness, even with the "fatigued and rusty scrap iron" that is the C language.


I'm sure that OpenBSD is much more secure in general than Linux. I love it and use it as my own firewall.

But even OpenBSD has had its share of local root exploits. They've even had 2 remote root exploits, and so few only because most services are disabled by default.


True. However, they also as a group have done some pretty significant research into problems declared solved in order to make systems more secure. For example, the change a few years back to make malloc based on mmap instead of more traditional heap pools, so that if there is an overflow, the program's much more likely to segfault than to go gleefully trampling over data, etc.
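
A rough sketch of that idea: give each allocation its own mapping and place an unmapped guard page right after it, so running off the end faults immediately instead of silently trampling a neighbour. (This is a simplification for illustration; OpenBSD's real malloc is considerably more sophisticated, and alignment handling is ignored here.)

    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Guard-page allocation sketch: the last page of the mapping is made
     * inaccessible, and the buffer is placed flush against it, so even a
     * one-byte overrun hits the guard page and the program segfaults. */
    void *guarded_alloc(size_t size) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = (size + page - 1) / page;
        size_t total = (data_pages + 1) * page;      /* +1 for the guard page */

        unsigned char *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        /* Make the trailing page inaccessible. */
        if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
            munmap(base, total);
            return NULL;
        }

        /* Return a pointer that ends exactly at the guard page. */
        return base + data_pages * page - size;
    }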

Additionally, it's worth keeping in mind that the reason for the OpenBSD team forking OpenSSL wasn't Heartbleed, but rather the OpenSSL team's terribly coded malloc replacement. All of the security research and practices the OpenBSD team typically insists upon couldn't do anything because OpenSSL insisted on running in its own leaky memory box.


> I'm not saying writing secure software in C is inherently impossible, but I'm not aware of it having ever been accomplished

Kinda proves my point.


Okay, then make a business case to migrate away from C. Take into account (1) available talent pool, (2) access to high-quality tools/libraries/documentation, (3) performance, (4) maintainability, (5) language interop.

Resources (time and money) are finite, as expected.

Nobody has found an alternative. Perhaps you have one in mind.


That's not the argument - a skyscraper made out of fatigued and rusty iron would fall over before it could open. However, a skyscraper that was not up to spec: no fire escapes, perhaps cutting back on the sprinkler system, maybe not putting in carpets; now that is more likely.

We have market forces that prevent people selling clearly defective skyscrapers. But we need regulators and investigators paid for out of the public purse to ensure the defects that are not obvious are found.

What we lacked here is a form of certification that the code was good enough. Or perhaps, we just lacked a form of certification that enough (qualified) eyeballs were looking at the code.


Certified software already exists ("trusted nix" anyone?).

Though in practice it's arguable whether it's actually any "better" than a non-certified one. It also costs a lot more money to certify.

Some OSS software has indeed problems with certification. A real example: if you do a statistical analysis for a government or public administration, you need to use certified software. Many people use Stata in that field though would prefer R for a number of reasons. I have, personally, twice as much confidence in any result that R is spewing compared to Stata. But Stata is certified, while R is not.

To bring it back to OpenSSL, you could readily start the certification process for a single version of OpenSSL if you really wanted to and, most importantly, had the money. You know, for your critical business. Or you can just buy a certified SSL implementation in the first place and use that. Because if YOU'RE responsible for security, whose fault is it when an exploit is found? The SSL developers', or yours for not using certified software?

Because that's how stuff is already working in practice.

People in this thread seem to forget how free/oss software is made. Complaining about implementation details or "organization" politics about a group of people loosely collaborating is a bit hypocritical.

Start a "Trusted OpenSSL" startup, collect money, and start employing the core developers. Then you might still have money to buy a Coverity license which will definitely help in spotting problems like these.


I did not think I was complaining about the developers - more that it turns out (in hindsight) that half the internet trusted six overworked folks to build everything they depended on. I think that some form of warning or certification for OSS projects that would surface things like "OpenSSL - used in 100 million sites, total spend 4 cents a week" would at least allow people to see where their donations should go.

Another perhaps related one would be a culture of paying for free software (as in trusted funding routes, foundations, and an expectation that users of free software at certain levels and sizes should contribute back).

Not sure how it will work, but raising the facts and figures to prominence will encourage a solution.


I agree with you.

I was just trying to guard people against the false hope (and more often, lock-in due to money constraints) that certification provides.


So do you actually know any other language, apart from C and C++, that will actually be compatible with all the existing needs? That means, among other things, being really fast, not using garbage collectors, and being linkable to the rest of the software written in C.

I know that Rust can be used once it matures, nobody knows when, but the question is: what about now?


What do you think a lot of those higher-level alternatives are written in?


Unfortunately, for the time being software will have to be written by humans, that is, fallible entities. You make it sound as if there were some magical security programmers who are above failure and could be trusted to get everything right, all the time. I think that is a dangerous illusion.

Is there even a name for that fallacy? It is so common - for example the same things happens when people call for regulations in finance. Regulation by whom? They imagine some magical finance genius people who would get everything right.


We don't.

It's more than possible to write good software in C. It's more than possible to write bad software in any other language. Particularly where security is concerned, C can give you control of operation timings and various other side channels that other languages don't.

C has not caused a single buffer overrun. Lazy coders have.


> C has not caused a single buffer overrun. Lazy coders have.

That's like saying exposed high voltage lab equipment has not caused a single shock, careless lab techs bumping into it have.

When you design for safety, you assume mistakes will happen, and you design accordingly. Continuing with the voltage example, for a low voltage, one that will shock but not cause lingering pain, a simple barrier would suffice to prevent careless bumping into. For higher voltage, a box with a door connected to an interlock that cuts power when the door is open is better. To receive a shock, either the interlock would need to fail, or the operator would need to deliberately defeat it. In even higher voltage situations, where the shock could cause permanent damage or death, trusting that single interlock would be insufficient, and further protections would be needed.

In general, assume mistakes will happen, and design so that a serious failure requires multiple mistakes and/or multiple component failures. Never trust your life to a single component not failing.

Whether C "causes" buffer overruns is irrelevant semantics. Using plain C without strong static analysis is equivalent to working directly on exposed voltage. Perfectly fine when nothing bad will happen as a result of inevitable mistakes. Negligent when securing a bank.


>> When you design for safety, you assume mistakes will happen, and you design accordingly.

Yes, and if more people working in C did this then we'd be better off. As it is, there are a lot of hacks and awful coders who shouldn't be let near a line of BASIC, let alone anything with security impact, but they still find their way into banking, payments and other financial, security-critical code.

Call me naive, but I reckon these absolute muppets will find a way to f*ck up regardless of language technology. You can't force someone to write correct code who doesn't really understand WTF they're doing, and no framework or language is going to fix that.


You've constructed your argument in such a way that it is both inarguable and non-actionable. Yes, it's true that with sufficient patience, attention to detail, time, and money, you can write sophisticated, secure software in C.

However, in the real world, it turns out that programmers are "lazy" (which is a pretty condescending way to describe it, btw). It's a fact that programmers using C have been responsible for millions, if not billions of dollars in economic losses due to simple buffer overflows, integer overflows, use after free, etc. So is the solution to come up with something better than C, or is it to just tell people to stop being lazy and hope they stop producing these bugs?


The solution would be to stop incompetents who shouldn't be allowed near a line of BASIC, let alone anything else, from being allowed anywhere near sensitive code, regardless of language.


This is why non-profit organizations such as the Apache Foundation and the Linux Foundation are extremely important. They act as a conduit for funds and resources to these important projects.

Organizations like Team Cymru also play an important role in discovering and mitigating exploits in both open and closed source software.

Perhaps there should be an open source crypto foundation or a crypto umbrella at the Apache Foundation to help foster and secure these types of very important projects.


It's my understanding that you cannot get tax-exempt non-profit status purely for developing and distributing software, regardless of license.

It's my understanding that software-oriented tax-exempt organizations, 501(c)(3)s, like the Apache Foundation categorize themselves as educational foundations in order to get tax-exempt status. It's hinted in the article that it's a pain to do FOSS despite altruistic intentions. I'm surprised that this shortcoming of the tax code wasn't underscored more.

If we are serious about getting more funding for open source, we should be lobbying to get FOSS (for some definition) categorized as providing scientific benefit or else add a new category of non-profits that provide free software (for some definition of free).

http://en.wikipedia.org/wiki/501(c)_organization

http://en.wikipedia.org/wiki/Apache_Software_Foundation


This is definitely true. The Yorba Foundation is a recent relevant example where the tax code failed.

I think the risk that the IRS is trying to mitigate is that companies establish a development-only organization and single-license its output to a shell to sell. We will need to find a nice dividing line that pushes FOSS causes forward while simultaneously preventing the creation of loopholes exploitable by for-profit entities.


> I think the risk that the IRS is trying to mitigate is for companies to establish a development only organization and single-licensing it to a shell to sell.

That seems easy to fix. Require that the software be available to the general public under the same terms. Which is probably how it already is, if I recall correctly a non-profit can't exist solely to benefit a private party.


There's a saying in the FOSS community: « Good, Cheap, Fast. Pick two. » A quick search on Google points to what seems to be called the Project Management Triangle:

https://en.wikipedia.org/wiki/Project_triangle

This author seems to imply that FOSS needs money in order to be good. That's not exactly true. It can help but that's not the only way.

He says for instance:

« It would not even be close to morally defensible to ask these people to forgo time to play with their kids or walk their dogs in order to develop and maintain the software that drives the profit in other people's companies. The right way to go—the moral way to go—and by far the most productive way to go is to pay the developers so they can make a living from the software they love. »

Sure, you can pay them. But you also can just be patient, and let them invest as little time as they want.

Free and Open Source Software is free so it's comprehensible that people usually don't work on it full time. But they do work on it and the result will turn out to be good. Eventually.


«It would not even be close to morally defensible to ask these people to forgo time to play with their kids or walk their dogs in order to develop and maintain the software that drives the profit in other people's companies. The right way to go—the moral way to go—and by far the most productive way to go is to pay the developers so they can make a living from the software they love.»

I'm... not really sure that "giving software away for free and then making impassioned posts on the Internet that people have a moral obligation to pay you to continue to work on it" is a strategy that has "success" written all over it.


"Eventually" is not sufficient, certainly not for security-related software. More generally, unless one is able to devote a sizable amount of time to a project, the "activation energy" threshold for good ideas to result in actual code is never reached.


Isn't this just running afoul of the fungibility of money, causing issues with placing a value on something?

That is, to focus on the money that OpenSSL gets as the only way that it is given value is to ignore all of the developers that directly contribute to it. Because they aren't paid otherwise?


What if there was some kind of public/privately funded foundation that could hire open source developers to work on their projects on a part-time or full-time basis?

Something like those physicist think tanks that Feynman refused to join. Along those lines, maybe universities could have that kind of thing too.


I believe any of the Mozilla, Apache, FreeBSD, or OpenBSD Foundations more or less fit the bill.


VCs already fund those foundations you speak of via startups that try to sell "enterprise hadoop"


Interesting article - I liked how it was based on a personal experience. As a side note, there's a pretty serious misunderstanding of the Apache Software Foundation in the article. Apache provides no money to developers. Instead it provides community support, infrastructure, and legal help to more than 150 projects under its umbrella. In particular, the Apache focus on building communities means that a project has a life beyond the involvement of a single contributor or company. Fundraising for development is reasonable and helpful, but must be balanced with contributor diversity. Otherwise the project will fall apart when the funding stops, leaving users in the lurch. (Note - I'm an Apache Software Foundation member, though I speak for myself in this comment.)


Non-free software is made by companies which are subject to national regulations. The "give us a backdoor to your security product or it will get very ugly for you" kind of regulation. Being non-free, those back doors are much harder to find than in open source software. Even if you found a back door, publishing it would be risky, as the software company would immediately sue you for violating the EULA.


Has anyone ever tried to push seriously to just have governments fund more open source software development? It just seems that open source software suffers from all the funding difficulties that go along with being a "public good" (in the economic sense). And there's a well-established mechanism for creating public goods: a government.


I think you'd be extremely hard pressed to make the argument that tax dollars should fund the development of Node, or WordPress, or a JavaScript calendar plugin.


How about software that much of the IT infrastructure of the country sort of relies on? The obvious example here is again OpenSSL. I don't think that the argument is that the government should fund _all_ open source software.


Maybe Universities should do more. As part of every undergrad CS course, get extra credit for finding and accurately reporting a bug in any well-known FOSS package. Bonus credit if it is a security-related bug. Even more credit for actually fixing the bug.


I would hate to see that happen. In a project with a sufficiently good review process, the investment of time in helping somebody contribute their first patch(es) will far outweigh the benefit, and amortise only for developers who stay with the project for longer.

Add extrinsic motivation to contribute patches, like extra credit, and what you will end up doing is abusing experienced open source contributors as teaching assistants instead of an actual contribution to open source.

Let's not forget that one of the guys who did open source to further their academic career, only to drop it later, was Robin Seggelmann, the creator of Heartbleed.


There's certainly an argument for research on this topic. The core theory of open source quality is "many eyes", and in the similar case of Wikipedia it works very well: much better than Encyclopaedia Britannica with its long-term expert curation. Whether or not it works for open source largely depends on the competence of the individuals vs the complexity of the bugs. However, most bugs are caused by trivial errors that just aren't easy to spot and happen to pass the test cases. That's where a high-volume, low-quality process like crowdsourcing can be very effective: someone, somewhere will spot those stupid errors, provided you have sufficient eyes on the code.

The idea that people should be discouraged from working on FOSS projects unless they plan to commit to one project is against the principles of open source, namely that it is open. In fact, if this were done over the course of a 3-4 year degree, it's likely that each student would stay with one project anyway, and very likely they'd continue contributing after graduation. I agree that a "one off" exercise would just lead to an influx of inexperienced coders and "do my homework" questions on mailing lists, but that's not what I had in mind at all. Ultimately, any university implementing such a programme should be thinking about the net benefit to the community and doing things like penalising students for filing duplicate bug reports to mitigate possible negative consequences.



