Beta 5.2 was when I had the best time with Counter-Strike. de_dust with a Colt was fun. Never forget the AWP snipers lurking near the big front door in cs_assault. There were some weird maps like cs_siege — I think it had some sort of moving vehicle somewhere in a tunnel.
Daala was never meant to be widely adopted in its original form — its complexity alone made that unlikely. There’s a reason why all widely deployed codecs end up using similar coding tools and partitioning schemes: they’re proven, practical, and compatible with real-world hardware.
As for H.265, it’s the result of countless engineering trade-offs. I’m sure if you cherry-picked all the most experimental ideas proposed during its development, you could create a codec that far outperforms H.265 on paper. But that kind of design would never be viable in a real-world product — it wouldn’t meet the constraints of hardware, licensing, or industry adoption.
Now the following is a more general comment, not directed at you.
There’s often a dismissive attitude toward the work done in the H.26x space. You can sometimes see this even in technical meetings when someone proposes a novel but impractical idea and gets frustrated when others don’t immediately embrace it. But there’s a good reason for the conservative approach: codecs aren’t just judged by their theoretical performance; they have to be implementable, efficient, and compatible with real-world constraints. They also have to somehow make financial sense and cannot be given away without some form of compensation.
It’s a bit like developing an F1 car. Or a cutting-edge airplane. Lots of small optimizations that have to work together. Sometimes big new ideas emerge, but those are rare.
Until the new codec comes together, all those small optimizations aren’t really worth much, so it’s a long-term research project with potentially zero return on investment.
And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
>> And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
Codecs are like infrastructure not products. From cameras to servers to iPhones, they all have to use the same codecs to interoperate. If someone comes along with a small optimization it's hard enough to deploy that across the industry. If it's patented you've got another obstacle: nobody wants to pay the incremental cost for a small improvement (it's not even incremental cost once you've got free codecs, it's a complete hassle).
That article is a scare piece designed to spread fear, uncertainty and doubt, to prop up an industry that has already collapsed because everyone else hated them, and make out that they’re the good guys and you should go back to how things were.
> The catch is that while the AV1 developers offer their patents (assuming they have any) on a royalty-free basis, in return they require users of AV1 to agree to license their own patents royalty-free back to them.
Such a huge catch that the companies that offer you a royalty-free license, only do so on the condition that you're not gonna turn around and abuse your own patents against them!
How exactly is that a bad thing?
How is it different from the (unwritten) social contracts of all humans and even of animals? How is it different from the primal instincts?
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).
Not to mention the computer clusters to run all the coding sims: thousands and thousands of CPUs are needed per research team.
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that I’d say it’s business as usual in the video coding realm.
Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
Google funding free stuff is not a real social mechanism. It's not something you can point to and say that's how society should work in general.
Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.
Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time and place specific one-off, it's not a thing that can be demanded or generalized.
In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they work well enough to consistently define the state of the art and to establish global standards (digital TV, DVDs, streaming: all patent-pool-based mechanisms).
> Google funding free stuff is not a real social mechanism.
It's not a social mechanism. And it's not generosity.
Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.
Royalty-free video and audio codecs make that implementation and deployment more likely in more places.
> Patent pools aren't perfect
They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."
Three different patent pools are trying to collect licensing fees for AV1:
The question is more, "who would write the HTTP spec?" except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc
Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since the adoption of Git made detailed tracking possible
The Top 10 organizations sponsoring Linux kernel development since the last report include Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas and Mellanox
---
curl does seem to be an outlier, but you still need to answer the question: "Who would develop video codecs?" You can't just say "Linux appeared out of thin air", because that's not what happened.
Linux has funding because it serves the interests of a large group of companies that themselves have a source of revenue.
(And to be clear, I do not think that is a bad thing! I prefer it when companies write open source software. But it does skew the design of what open source software is available.)
I've used and developed for Linux since 1994 (long before major commercial interests), and I work for Red Hat so it's unlikely I misunderstand how Linux was and is developed.
You could say "Linux was CREATED out of thin air", and I wouldn't argue with you.
But creation only counts for so much -- without support, Linux could still be a hobby project that "won't be big and professional like GNU"
I'm saying Linux didn't APPEAR out of thin air, or at least it's worth looking deeper into the reasons why. "Appearing" to the general public, i.e. making widely useful software, requires a large group of people over a sustained time period, like 10 years.
----
i.e. Right NOW there are probably hundreds of projects like Linux that you haven't heard of, which don't necessarily align with funders
I would actually make the comparison to GNU -- GNU is a successful project, but there are various efforts underneath it that kind of languish.
I'm saying that VIDEO CODECS might be structurally more similar to these projects, than they are to the Linux kernel.
i.e. making a freely-licensed kernel IS aligned with Red Hat, Intel, Google, but making an Intelligent Personal Assistant is probably not.
Somebody probably ALREADY created a good free intelligent personal assistant (or one that COULD BE as great as Linux), but you never heard of them. Because they don't have hundreds of companies and thousands of people aligned with them.
My point was, a lot of the early corporate support came from smallish companies built specifically around Linux. Red Hat is the perfect example of that; it started as a university project to make a distro.
It took a while (and a lot of pain) to get a lot of driver vendors to come fully into the project, yet Linux was already gaining a bunch of traction at that time (say, the latter half of the '90s).
I'll give you that Intel was always more or less a good actor though! But Google didn't exist when Linux already mattered. And when Google was created, they definitely benefited a lot from it, basing much of their infra on it.
Marketing needs (and lawyer approval) can bring support faster than most things. Opus for audio is a good example of that too.
Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened. We’re not talking about a software project that you can just hack together, compile, and see if it works. We’re talking about rigorous performance and complexity evaluations, subjective testing, and massive coordination with hardware manufacturers—from chips to displays.
People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
As someone who led an open source team (of mostly volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun; see https://github.com/xiph/daala
Working with fine people from Xiph.Org and the IETF (and later AOM) on the royalty-free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as a professional engineer.
Daala had some really good ideas. I only understand the coding tools at the level of a curious codec enthusiast, far from an expert, but it was really fascinating to follow its progress.
Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.
People don't develop video codecs for fun because there are patent minefields.
You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.
Well, you could, if you weren't afraid of getting sued for infringing on patents.
> Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened.
You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even of publication. Your proposition is really unprovable in either direction because it feeds back on itself.
I don't do video because I don't work with it, but I do image compression for fun and no profit. I do use some video techniques due to the type of images I am compressing. I don't release because of the minefield. I do it because it's fun. I often kick the simulation runs and other tasks to the cloud for the larger compute needs.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
Hmm, let me check my notes:
- Quite OK Image format: https://qoiformat.org/
- Quite OK Audio format: https://qoaformat.org/
- LAME (ain't a MP3 Encoder): https://lame.sourceforge.io/
- Xiph family of codecs: https://xiph.org/
Some of these guys have standards bodies as supporters, but in all cases bigger groups formed behind them after they had made considerable effort. QOI and QOA were written by a single guy just because he was bored.
For example, FLAC is a worst-of-all-worlds codec for industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32-bit samples, and sample rates up to 640 kHz, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
On the other hand, we have LAME: an encoder which also uses psychoacoustic techniques to improve the resulting sound quality, and almost everyone is using it, because the closed-source encoders generally sound lamer than LAME at the same bit rates. Remember, the MP3 format doesn't have a reference encoder. If the decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, yes, and yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight these codecs, because they are terrified of being deprived of their monies.
If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can take video only comes with "personal, non-profit and non-professional" licenses on board, and why you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?
For the license disclaimers in camera manuals, see [0].
Patents, by design, give inventors claims to ideas, which gives them the money to drive progress at a pace that meets their business needs.
Look at data compression. Sperry/Univac controlled key patents and slowed down invention in the space for years. Was it in the interest of these companies or Unisys (their successor) to invest in compression development? Nope.
That’s by design. That moat of exclusivity makes it difficult to compensate people to come up with novel inventions in-scope or even adjacent to the patent. With codecs, the patents are very granular and make it difficult for anyone but the largest players with key financial interests to do much of anything.
> It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
That’s hardly true. Nvidia’s tech is covered by patents and licenses. Why else would it be worth 4.5 trillion dollars?
The top AI companies use very restrictive licenses.
I think it’s actually the other way around: the AI industry will end up following the video coding industry when it comes to patents, royalties, licenses, etc.
Because they make and sell a lot of hardware. I'm sure they do have a lot of patents and licences, but if all that disappeared today it'd be years to decades before anyone could compete with them. Even just getting a foot in the door in TSMC's queue of customers would be hard. Their valuation can likely be justified based on their manufacturing position alone. There is literally no-one else who can do what they do, law or otherwise.
If it were just a matter of laws, China would simply declare that the law doesn't count in order to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese people involved is generally good for prices, but that has less to do with market structure than with the fact that they work hard and do things at scale.
> The top AI companies use very restrictive licenses.
The top AI companies don't release their best models under any license. They're not even distributed at all. If you did steal the weights out from underneath Anthropic they would take you to court and probably win. Putting software you develop exclusively behind a network interface is a form of ultra-restrictive DRM. Yes, some places are currently trying to buy mindshare by releasing free models and that's fantastic, thank you, but they can only do that because investors believe the ROI from proprietary firewalled models will more than fund it.
NVIDIA's advantage over AMD is largely in the drivers and CUDA, i.e. their software. If it weren't for IP law, or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIA's advantage would never have been established. In turn, that'd have meant they wouldn't have any special privileges at TSMC.
I'm not opposed to codecs having patents but Chiariglione set up a system where each codec has as many patent holders as possible and any one of those patent holders could hold the entire world hostage. They should have set up the patent pool and pricing before developing each codec and not allowed any techniques in the standard that aren't part of the pool.
> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.
How about governments? Radar, Laser, Microwaves - all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
It seems that you have a massive misunderstanding of how this works.
University research labs, usually with a team of no more than 10 people (at most 20), are good at producing early, proof-of-concept work, but not at incredibly complex projects like creating an actual codec. They are not known for producing polished, mature commercial products that can be immediately used in the real world. They don't have the resources or the incentive to do so.
> They don't have the resources or the incentive to do so.
Of course they do. Guess how MP3 was developed: as an offshoot of work at the German Fraunhofer Institute and FAU Erlangen-Nürnberg, amongst others.
The fact that no one seems to even be able to imagine how funding anything from the government could even work (despite that era being just a few decades ago) is shocking.
Exactly. Not long ago, someone showed up on Hacker News who had, on his own, begun to rediscover the benefits of arithmetic coding. Naturally, he was convinced he’d come up with a brand-new entropy coding method. Well, no harm done, and it’s nice that people study compression, but I was surprised how easily he got himself convinced of a discovery. Clearly he knew very little.
Overall, I think this is a positive ”problem” to have :-)
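For readers wondering what was being rediscovered, here is a minimal, purely illustrative sketch (Python, with made-up numbers, not from anyone's actual project): a per-symbol prefix code such as Huffman has to spend at least one whole bit per symbol, whereas an arithmetic coder can approach the Shannon entropy, which for skewed sources is far below one bit per symbol.

    from math import log2

    # A heavily skewed binary source, e.g. "mostly zero" residual data.
    # (Probabilities are made up purely for illustration.)
    probs = {"0": 0.95, "1": 0.05}

    # Shannon entropy: the lower bound any entropy coder can approach.
    entropy = -sum(p * log2(p) for p in probs.values())

    # A per-symbol prefix code (Huffman) on a binary alphabet cannot spend
    # less than 1 bit per symbol.
    huffman_avg = 1.0

    print(f"entropy          : {entropy:.3f} bits/symbol")   # ~0.286
    print(f"huffman (binary) : {huffman_avg:.3f} bits/symbol")
    # An arithmetic (or range) coder can get arbitrarily close to the
    # entropy, which is the benefit being rediscovered above.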
I've had several revolutionary discoveries during my time programming. In each case, after the euphoria had settled a bit, I asked myself: Why aren't we already doing this? Why isn't this already a thing? What am I missing?
And lo and behold, in each case I did find that it was either not novel at all or it had some major downside I had initially missed.
Still, fun to think about new ways of doing things, so I still go at it.
I mean, I think it would be a positive problem to have if people were actually learning things and growing, but... honestly that doesn't seem to be the case. What I see here is marketing fluff about a "brand new format" that does something off-the-shelf software already does and doesn't actually fit the intended use case, all of which would be fine if it weren't also AI-generated.
> Security patrol will come and bother you if you hang around the bridge for a few minutes?
There’s a land war in Europe. Hundreds of thousands have lost their lives during the past few years. There have been cases of sabotage against the Baltic states as well as the Nordic states. Things are pretty grim there and lurking around basic infrastructure pretty much guarantees a talk with the police.
Plus Estonia in particular is 200 km away from St Petersburg, and 800 km from Moscow. They are all but guaranteed to succumb to Russian expansion if it is allowed to continue unchecked.
And it's not just Europe either. The US has suffered from multiple attacks on electrical substations [1] as well with unknown perpetrators (the suspicion is white supremacists), and on top of that come rednecks shooting at power lines and god knows what else.
Paranoia surrounding critical infrastructure is skyrocketing at the moment, and I'd say for a bunch of very good reasons.
My eyes were opened when a field called gamification appeared in the early 2010s. In a few years many gamification researchers had tens of thousands of citations and h-indexes nearing a hundred. Well, if you think about it, they’re gamers; they’ve been grinding their RPG characters, sniping skills and whatnot for thousands of hours. It’s only natural that these guys and girls figure out how to reach the maximum scientific high score.
Some of the gamification researchers are near the top 500 of that 2% list. Now ask yourself: is gamification something that should make you one of the top 500 scientists in the world? I doubt it, but modern science is a citation game. Nothing else.
Gamification as a field of study also reminds me a lot of this video[1] of a talk about tips for game developers. It is very interesting because it is from 2016 which is close to the time period you mentioned. I’m pretty sure this video has been shared on Hacker News a bunch.
reminds me of the fraudulent Ariely and Gino papers that were exposed a year or two ago, where a very common comment was "they're dishonesty researchers, of course they are going to be dishonest".