>It had passed through our new modeling tools, through two different intermediate converter programs, had been loaded up as a complete database, and been rendered through a fairly complex scene hierarchy, fully textured and lit (though there were no lights, so the triangle came out looking black). The black triangle demonstrated that the foundation was finally complete - the core of a fairly complex system was completed, and we were now ready to put it to work doing cool stuff.
Wow, talk about big design up front! This sounds like the opposite of today's "MVP first, get something into the customer's hands RIGHT NOW and iterate later" way of doing software development: YAGNI, KISS, etc. Nice reminder of how things used to be done (and still are done in some verticals). I'm having trouble recalling any projects I worked on recently where a "Black Triangle" moment was even possible, given the usual focus on early, customer-visible wins.
I think the key element here is that it describes a moment where the problem space opened up and original engineering became necessary. When this happens, programmers tend to swarm over the problem for a few years, doing a ton of invention, until it's been heavily covered; and then afterwards the majority use an off-the-shelf option, with a few experts who stick behind and do maintenance.
This cycle defines technologies in their earliest stages, but as a technology becomes more of a commodity, the MVP comes to the forefront, since it allows marketing to lead the technology from the beginning.
With game engines, the problem space is ultimately specified by game design ideas, so it's extremely easy to enter the territory of invention by coming up with something that hasn't been done and saying "I will be the first one to do that!" At that point you have no alternative but to reach the black triangle, and once you've reached it, that's the engine. That point is the starting line for a commodity-code project. Everything after that is the "other 90%" of making it fleshed out and usable.
Hahahahahaha, 'MVP' doesn't exist in hardware or low-level hardware coding (e.g. this, which is a PlayStation game that presumably runs bare-metal).
In hardware, you generally do ~80% of the design work before you even have a viable demo. If you're writing a bare-metal rendering engine, it's very much the same. MVP isn't used because none of the infrastructure is even available to have one.
TL;DR MVP isn't possible if you don't have the infrastructure.
Not sure I agree. It is certainly possible to hack up the basic functionality of an embedded system; full of magic numbers and lacking the elegant hardware abstractions, OS layer and power management we all want.
Edit: Not to mention in hardware, where you can neglect design for manufacturing, solder modwires everywhere, and power everything off a 10kg power supply. MVPs do exist in hardware and embedded SW.
I think there's a difference between an MVP and a proof of concept.
You can hack together a one-off PoC for a hardware feature, in order to know the feature is viable--but what you've created isn't a viable product, minimal or otherwise. With an MVP, you'd expect to be able to incrementally improve the design into a non-minimal one, but with a PoC there's no real way to take it and "refactor" it into a shipping product at all. The design with modwires and a 10kg power supply (or the embedded-software equivalent) doesn't share any similarities, except at the most abstract level, with the design that can be mass-manufactured.
Once you're done experimenting and you know what you want, you really do need to design the "production model" from the top down, with all the layers that will enter into the solution considered in advance.
A concrete example: you can't just "write a voxel renderer" and then expand it into a full game engine. If the full game engine will have a rendering pipeline with support for things like dynamic lighting and fur physics, then voxel rendering will be implemented in that engine in a completely different way than in one that only needs to render voxels.
Editing one into the other would be like editing the code defining a b-tree into code for a hash-table: there would be exactly one edit, which would replace all the old lines with entirely new ones.
Cutting edge 3D game engines are incredibly complex pieces of software, and this was at the beginning of the 3D game revolution where there weren't any third party engines available, no one really knew what they were doing, and most developers didn't even have internet access; you did almost everything in house or you didn't have a game. It was completely uncharted territory, not some #yolo agile web development thing where 99% of the engineering effort is already done for you and you can just focus on the content.
This was a long time ago (1994), but even today, if you're writing a 3D renderer from the ground up, your polys are gonna pass through a big ol' pipeline to get onto the screen. No way around it.
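To make the "big ol' pipeline" concrete, here is a minimal sketch of the classic vertex path: each polygon's vertices go through a projection transform, a perspective divide, and a viewport mapping before landing in pixel coordinates. The matrix layout, 90-degree FOV, and 640x480 resolution here are illustrative assumptions, not anything from the original story.

```python
import math

def perspective(fov_deg, aspect, near, far):
    """Build a right-handed perspective projection matrix (row-major, column vectors)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), (2 * far * near) / (near - far)],
        [0, 0, -1, 0],
    ]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(vertex, proj, width, height):
    """Projection transform, perspective divide, then viewport mapping."""
    x, y, z, w = mat_vec(proj, [*vertex, 1.0])
    ndc_x, ndc_y = x / w, y / w          # normalized device coords, [-1, 1]
    sx = (ndc_x + 1.0) * 0.5 * width     # map to pixel coordinates
    sy = (1.0 - ndc_y) * 0.5 * height    # flip y: screen origin is top-left
    return sx, sy

# One untextured, unlit triangle -- the black triangle -- 5 units in front
# of the camera, projected into a 640x480 window.
proj = perspective(90.0, 640 / 480, 0.1, 100.0)
triangle = [(-1.0, -1.0, -5.0), (1.0, -1.0, -5.0), (0.0, 1.0, -5.0)]
screen = [project(v, proj, 640, 480) for v in triangle]
# apex lands at (320.0, 192.0): horizontally centered, above the midline
```

A real renderer adds model and view transforms, clipping, rasterization, and (eventually) texturing and lighting on top of this, but every vertex still makes this same trip.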
Anyway, you don't want to be iterating on this stuff too much once it's rolled out - it tends to make the art team unhappy. They'd far rather work without the tools constantly changing underneath them.
Game programming does have engines and frameworks to get you off the ground quickly, and the early access stuff definitely feels like an equivalent to the "MVP first, iterate later" approach. This approach is only recently becoming viable because of digital distribution on PCs and on consoles. Before that you shipped a disk, and that was it. So you didn't get the chance to do any real iterating.
It wasn't until maybe 5 years ago that the out of the box engines were any good. Before that you either had to pay a lot of money (Unreal Engine, id Tech, Cryengine... also Flash!) or you rolled your own stuff out of the libraries that were lying around (Ogre3D, OpenAL, irrlicht). Once Unity and other engines with low entry points became common, things got a lot better.
But game programming also frequently runs into another issue: performance. Your whole game world has to update in 16-33 milliseconds, so you end up dealing with some very low level problems because fitting everything into 16ms can be really hard. Just take Minecraft as an example.
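The 16-33 ms figures fall straight out of the target frame rate: the per-frame budget is 1000 ms divided by frames per second, and every subsystem has to share it. A toy illustration, where the subsystem costs are made-up numbers purely for the example:

```python
def frame_budget_ms(fps):
    """Milliseconds available per frame at a given target frame rate."""
    return 1000.0 / fps

def fits(subsystem_costs_ms, fps):
    """True if the summed per-frame work fits inside the frame budget."""
    return sum(subsystem_costs_ms.values()) <= frame_budget_ms(fps)

# Hypothetical per-frame costs in milliseconds -- 15.5 ms total.
costs = {"input": 0.5, "ai": 3.0, "physics": 5.0, "render": 7.0}

print(frame_budget_ms(60))   # ~16.67 ms per frame at 60 fps
print(fits(costs, 60))       # True: 15.5 ms squeezes into ~16.67 ms
print(fits(costs, 120))      # False: only ~8.33 ms available at 120 fps
```

The budget is unforgiving: doubling the target frame rate halves the time every system gets, which is why so much game code ends up fighting very low-level battles over microseconds.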
There are some problems where you just have to write a lot of groundwork before you can see anything for it and pushing something out fast wouldn't really have any benefit.
I think black triangles are only really possible if you're working on software architecture - that bit when you "go into the tunnel" and are working in purely abstract space comes with such a payoff once you reach your project's "black triangle". There's really nothing like it, it's a great feeling!
We get a lot of these in AI research. When your math is a hideous snarl of abstract algebra and graph structures and your code includes a novel 3d dynamics engine and custom collision detection, it's exhilarating to see your simulated robot so much as twitch a finger in the right direction.
Oh man, I've needed a term for this for so long and have never realized it! Thanks!
I remember one time, similar to the story linked but I was refactoring rather than coding something new. I made a major breakthrough and, like the people in the article, I was very excited about it. Coworkers (who do low-level stuff - I do FP and CSS stuff) didn't see what was so great (it was clearly visually broken) until I said "Yes, but it's broken in just the right way that I can fix it!" They understood instantly.
I think these kinds of "black triangle successes" are something only programmers and mathematicians experience (they are the only two types of people I've known to sympathize with the feeling.) It's like an invisible thread that connects all types of programmers, from FP guys to kernel guys to web developers to the three people still using Smalltalk to the guys in suits making InterfaceFactoryFactories. I love it.
The same thing happens with building construction. If you've ever watched a building being built somewhere near where you live or work, you've seen it: Months go by with it as a vacant lot. Then one day someone cuts the weeds down, and there are some flags sitting around. Then tons of time goes by with bulldozers, and then interminable fiddling around with digging holes and foundation laying. No progress is apparent for a long time, then one day, "whoosh!" there's a building there, once the framework goes up.
There, too, the 80/20 rule applies just as it does in software. Because once that framework goes up and there's all this fast progress, the next step is months of finish work where--from casual observation on the outside, at least--things look pretty much the same while all the details are wrapped up.
Oh, this is back online? I looked for it earlier this year after I realised that ‘black triangle’ wasn’t a known idiom among my colleagues, but had to dig into web.archive.org to find it.
No, because this represents a good deal of engineering coming together and actually functioning in concert.
It's not just some "woot let's crush some code" and a weekend later everything works and that one difficult feature is working well enough for your live demo; this is having several different layers of functionality coming together and working in unison, each being relatively worthless without the rest of the whole.
When you're building something like a high-performance 3D pipeline, or a low-level embedded system, there's no MVP that really counts and there's seldom any little features to egg you on until these black triangle moments.
It's like being an aerospace engineer--the whole damn thing works, together and at once, or nothing does. That's why I've found web development to be such a nice break from the work I used to do: I can always make seeming progress, because I can always see the results of what I'm working on.
At the same time, I never get the same rush of accomplishment of seeing, say, a model loaded from a custom conversion tool rendered using multiple threads and being lit properly. It's a different sort of work.