> It seems to me... that initialization time is unlikely to be significant
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. In languages that are much stricter in their approach, initialization is done at that per-object level, which means the cost of initialization goes from being anywhere between free (VirtualAlloc/mmap has to produce zeroed memory) and trivially linear (e.g. a memset) to a much more nested hierarchy of initialization (e.g. a for-loop running a constructor for each value).
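To make that concrete, here's a rough Odin sketch of the two shapes of cost (the `Entity` struct and `make_entity` procedure are made up for illustration): zero-initializing the whole block as a group versus running a per-element "constructor" loop.

```odin
package init_cost

Entity :: struct {
	pos, vel: [3]f32,
	health:   f32,
	flags:    u32,
}

// A hypothetical per-object "constructor"-style procedure.
make_entity :: proc() -> Entity {
	return Entity{health = 100, flags = 1}
}

main :: proc() {
	// "Group" initialization: the whole array is just the zero value.
	// Fresh pages from the OS are already zeroed, so this can be free,
	// and at worst it is a single linear clear over the block.
	entities_zero: [1024]Entity

	// Per-object initialization: a loop running a procedure per element,
	// touching every field of every element individually.
	entities_ctor: [1024]Entity
	for i in 0..<len(entities_ctor) {
		entities_ctor[i] = make_entity()
	}

	_ = entities_zero
	_ = entities_ctor
}
```

The second form is the shape that a stricter, per-object style tends to push you toward for every aggregate, which is where the cost quietly adds up.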
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
That's what I was trying to get at by talking about making ZII opt-in. If you're using a big chunk of memory — say a matrix, or an array of matrices — it's a win if you can zero-initialize it cheaply or for free, sure. In JS, for example, you'd allocate an ArrayBuffer and use it immediately (via a TypedArray or DataView).
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
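Sketching that same split in Odin terms (just an illustration of the idea, not something from the comment above): the zeroed bulk allocation is immediately usable, while the zero value of the handle itself is exactly the nil "pointer to nothing" you don't want.

```odin
package buffers

main :: proc() {
	// Bulk storage: make() hands back zero-initialized memory,
	// so it is immediately usable, much like a fresh ArrayBuffer in JS.
	samples := make([]f32, 1024*1024)
	defer delete(samples)
	samples[0] = 1.0

	// But the zero value of the handle itself is nil with length 0:
	// the "pointer to nothing of unknown size" case where ZII helps no one.
	stale: []f32
	assert(stale == nil)
}
```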
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the Smalltalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know Smalltalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
Making it opt-in means making the hierarchical approach the default. Whatever you make "opt-in", you are by default discouraging its use. And what you are suggesting as the default is not what I wanted from Odin (I am the creator, by the way).
I normally say "try to make the zero value useful" and not "ZII" (which was a mostly jokey term Casey Muratori came up with as a contrast to RAII) because then it is clear that there are cases where ZII is not possible. ZII is NOT a _maxim_ but what you should default to, doing something else where necessary. This is my point, and I can probably give you even more examples of where "ZII is bad" than you could think of, but that is the problem with describing this to people: they take it as a maxim, not a default.
And regarding pointers, I'm in the camp that nil pointers are, empirically speaking, the most trivial kind of invalid pointer to catch. Yes, they cause problems, but because of how modern systems are structured with virtual memory, they are trivial to catch and deal with. You could design the type system of a language so that nil pointers are not a thing unless you explicitly opt into them, but that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
Yeah, that's fair, clearly this sort of thing is why we have multiple languages in the first place!
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it; you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
That would require having constructors, which is not something Odin will ever have, nor should it. However, you can just initialize with a constant or variable or just use a procedure to initialize with. Odin is a C alternative after all, so it's a fully imperative procedural language.
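As a rough sketch of what that looks like in Odin (the `Config` type and `make_config` procedure here are invented for illustration):

```odin
package init_styles

Config :: struct {
	width, height: int,
	scale:         f32,
}

DEFAULT_WIDTH :: 1280

make_config :: proc(w, h: int) -> Config {
	return Config{width = w, height = h, scale = 1.0}
}

main :: proc() {
	a: Config                    // zero value: every field zeroed
	w := DEFAULT_WIDTH           // a constant...
	b := Config{width = w, height = 720, scale = 1.0} // ...and a variable
	c := make_config(1920, 1080) // a procedure doing the initializing
	_ = a
	_ = b
	_ = c
}
```

No constructors involved; it's all plain values and procedures, which fits the "C alternative" framing.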
Why would it require constructors? As opposed to simply enforcing that it always be initialized with a constant/variable/procedure/etc rather than zeroed.
> you can just initialize with a constant or variable or just use a procedure to initialize with.
Is there an option to leave something uninitialized? I often find the allocation of explicitly uninitialized objects to be a performance necessity in tight loops when I'm working with numpy.
> ZII for local variables reminds me of the Smalltalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know Smalltalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Yet, after a decade of embracing Swift, which tries to eliminate that aspect of nil handling in Obj-C, Apple software is buggier than it's ever been. Perhaps not crashing on every nil in large complex systems does lend itself to a more stable system.
A good little video on these trade-offs is Casey Muratori's "Smart-Pointers, RAII, ZII? Becoming an N+2 programmer": https://www.youtube.com/watch?v=xt1KNDmOYqA