
> it is useful for the compiler to force you to handle the case in which pointers are null.

Well I agree that it's very useful (and we have that in Nim), but..

> With constructs like Option::map the code is usually even less verbose than the equivalent code with null.

I'm still not convinced of this part. It certainly hasn't been the case with the (admittedly small amount of) Rust code I've seen. However, I'll look for more comparisons in the future (or offer Nim comparisons to Rust snippets anyone posts). Point is, nil is still a useful and commonly used tool. So the argument about verbosity and convenience is relevant, IMO.
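
For reference, here's roughly the shape of the two styles as I understand them (a minimal Rust sketch; the `find_user` function and `User` type are invented purely for illustration):

```rust
// Hypothetical types and functions, just to show the shape of both styles.
struct User { name: String }

fn find_user(id: u32) -> Option<User> {
    if id == 1 { Some(User { name: "alice".into() }) } else { None }
}

fn main() {
    // Option::map / unwrap_or_else chain: no explicit nil check in sight.
    let label = find_user(1)
        .map(|u| u.name.to_uppercase())
        .unwrap_or_else(|| "anonymous".to_string());
    println!("{}", label);

    // The same thing spelled out with a match, closer to a manual nil check.
    let label2 = match find_user(2) {
        Some(u) => u.name.to_uppercase(),
        None => "anonymous".to_string(),
    };
    println!("{}", label2);
}
```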

> With null, the semantics of the language are that an exception can be thrown [1] whenever those constructs are invoked. That's pretty much objectively easier to reason about.

That completely depends on how often you want to use nil refs, and how easy they are to use. Like I said in another response, I agree Rust's design may be better for some domains, but I certainly wouldn't call it "objectively" easier to reason about in a general sense.

> Huh? Lifetimes are totally independent.

Well, like my post implied, I was only guessing at the design, and it's interesting to hear that it takes advantage of special compiler optimizations. That said, I still don't see how it's completely decoupled from the lifetime system.. you're saying that if I have an Option<> reference to a mutable list in Rust, the compiler can determine whether or not the list is 'frozen' based on the runtime state of that reference?

> Or you could do what Nim does, and make dereferencing null undefined behavior.

I didn't think derefing nil was undefined behavior. I thought only dereferencing a pointer which points to once-valid-but-now-freed memory was undefined behavior, and that situation is covered by GCed refs. Can you explain this a bit?

EDIT:

> Not in my experience. They show up in production all the time.

I did say 'rarely', and I drew a comparison to bounds-check crashes, which surely also show up in production.



> Point is, nil is still a useful and commonly used tool. So the argument about verbosity and convenience is relevant, IMO.

The only advantage of having null references is that the pattern "if this reference is null, dereference it; otherwise throw an exception" is shorter. But the question is: how often do you want that pattern? In a robust program, the answer to that is "rarely".

Put another way, it would be trivial to add sugar for the ".unwrap()" pattern to Rust (perhaps with the "!" operator) if it were necessary, gaining back the only verbosity-related advantage of null pointers. But nobody in the Rust community is asking for it. That's because this pattern is rare. If it were a problem, someone would have at least submitted an RFC by now!
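
Concretely, that "deref or throw" pattern is already a single method call today; a minimal sketch (the `config` value is made up):

```rust
fn main() {
    let config: Option<&str> = Some("debug=true");

    // "If this is None, panic" -- the moral equivalent of dereferencing a
    // null pointer and letting it throw. It's one method call, hardly verbose.
    let value = config.unwrap();
    println!("{}", value);

    // In robust code you'd usually handle the None case explicitly instead:
    let value = config.unwrap_or("defaults");
    println!("{}", value);
}
```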

> I certainly wouldn't call it "objectively" easier to reason about in a general sense.

If you write down, formally, what the star or dot operators do, there are strictly more steps involved when you have null pointers. That's why a language without null is objectively easier to reason about.

> if I have an Option<> reference to a mutable list in Rust, the compiler can determine whether or not the list is 'frozen' based on the runtime state of that reference?

I don't know what this means. Lifetimes rule out dangling pointers. They don't have anything to do with nullability. The borrow checker only cares about the structure of your data enough to construct loan paths.
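
A minimal sketch of what I mean by "totally independent" (ordinary Rust, nothing specific to the earlier discussion):

```rust
fn main() {
    // Option is just an enum; no references and no lifetimes involved here.
    let maybe_n: Option<i32> = Some(5);
    println!("{:?}", maybe_n);

    // A reference has a lifetime but can never be null; the borrow checker
    // only cares that `r` doesn't outlive `x`.
    let x = 10;
    let r: &i32 = &x;
    println!("{}", r);

    // The two can be combined, but neither implies the other.
    let maybe_r: Option<&i32> = Some(&x);
    println!("{:?}", maybe_r);
}
```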

> Can you explain this a bit?

Dereference of null is undefined behavior in C, and Nim compiles to C code that blindly dereferences pointers without inserting null checks. So dereference of null is UB in Nim too. In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

> I did say 'rarely', and I drew a comparison to bounds-check crashes, which surely also show up in production.

Actually, Rust does try to prevent indexing-related issues by preferring iterators to raw array indexing. But, in any case, the comparison isn't relevant for a couple of reasons. First of all, in a general sense if you have big problems A and B, the fact that you can't solve B isn't an excuse to not solve A. More specifically, though, the amount of type system machinery needed to fully eliminate bounds check failures is much higher than that needed to eliminate null pointer exceptions—you basically need dependent types, whereas to eliminate null pointers all you need are bog-standard algebraic data types, which have existed since the 70s.
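
To be concrete about the "bog-standard" part: Option is just a two-variant enum, and iteration sidesteps manual index arithmetic entirely. A rough sketch (the `Maybe` enum is only there to show the shape of the type):

```rust
// Option<T> is nothing exotic; it is roughly this two-variant enum.
#[allow(dead_code)]
enum Maybe<T> {
    Nothing,
    Just(T),
}

fn main() {
    let xs = vec![1, 2, 3];

    // Iterating avoids hand-written index arithmetic, so there is no index
    // to get wrong in the first place.
    let sum: i32 = xs.iter().sum();
    println!("{}", sum);

    let _wrapped: Maybe<i32> = Maybe::Just(sum);
}
```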


> there are strictly more steps involved when you have null pointers.

Well yes, and both Nim and Rust have non-nil pointers.. I suppose I misread your original statement as "Rust is objectively better ..." when you actually just said non-nil vars are an objectively better design pattern in general. My mistake.

Our disagreement seems to stem from two assertions (one from you, and one from me): "nil vars are rare (in optimally written code)", and "Rust's way of working with 'nil' vars is verbose". I suppose I'll concede that non-nil vars are a better default (though I will hold my reservations until I see more real statistics; I don't find "no RFC yet!" hugely convincing), but I also feel Rust could do a better job of giving access to "nilable" vars when they're needed.

> I don't know what this means. Lifetimes rule out dangling pointers...

I mean, Rust prevents you (via compile-time mechanisms) from mutating a variable while it's borrowed by another reference.. If that reference is an Option<>, it's only known at runtime whether or not a reference has actually borrowed said variable. Rust must either treat every Option<> reference as a potential 'loan path', which would significantly diminish their usefulness as references, encouraging indexing for these scenarios, which leads to almost identical potential for out-of-bounds crashes... or it's relying on some kind of more complex mechanism (lifetime vars maybe?).. or additional runtime overhead.

I really don't know enough about Rust to know how far off-base that is. So any clarity is appreciated.

> In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

I remember this comment, but I didn't remember it achieving UB in debug code.. I'll look through the history and take another look.


> Rust must either treat every Option<> reference as a potential 'loan path', which would significantly diminish their usefulness as references, encouraging indexing for these scenarios, which leads to almost identical potential for out-of-bounds crashes... or it's relying on some kind of more complex mechanism (lifetime vars maybe?).. or additional runtime overhead.

Can you give a concrete example of this? I'm a bit confused, but it might just be a terminology thing. In Rust, `Option<T>` does not imply a reference. If you have an `Option<i32>`, there are no references involved. An `Option<T>` also owns the `T` if there is one. You can get a reference to it, but you have to check that it indeed holds a `T` (via `match` or functionality built on `match`).

I should clarify: What's confusing me is the indexing stuff. I'm not sure if this is referring to something about the `Option<T>` or something else.
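
A quick sketch of what I mean, and of why the "frozen" question is settled at compile time rather than from runtime state (ordinary Rust, nothing exotic):

```rust
fn main() {
    // Option<i32> owns its value; no reference and no lifetime involved.
    let maybe: Option<i32> = Some(3);
    match maybe {
        Some(n) => println!("got {}", n),
        None => println!("nothing"),
    }

    // Option<&Vec<i32>> is different: constructing it takes a borrow, and
    // the vector stays frozen for as long as that borrow is in use -- even
    // if the Option happens to be None. The check is done entirely at
    // compile time; no runtime state is consulted.
    let mut list = vec![1, 2, 3];
    let maybe_ref: Option<&Vec<i32>> = Some(&list);
    // list.push(4); // <- rejected by the compiler while `maybe_ref` is live
    println!("{:?}", maybe_ref);

    // Once `maybe_ref` is no longer used, mutation is allowed again.
    list.push(4);
    println!("{:?}", list);
}
```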


> I should clarify: What's confusing me is the indexing stuff. I'm not sure if this is referring to something about the `Option<T>` or something else.

By indexing, I meant as an alternative to references.. For example, if you had a Sprite type which held a 'reference' to a Texture in your game's Texture list.. as soon as you allocate a Sprite it must borrow a reference to a Texture, preventing any future mutation of the Textures list for the lifetime of the Sprite, which is obviously too restrictive for most games.. so the alternative is to have the Sprite simply hold an index into the array instead, but this basically comes with the same pitfalls as nilable refs (i.e., if you accidentally change it, your program can crash due to bounds-checking errors.. or end up with visual glitches.. not sure which would be more annoying).

The other alternative is to use an Option<&Texture> instead. However, I'm not familiar enough with Rust to know of the restrictions here, or even if that's possible (taking a look at the docs, it looks like it's possible, but life-time vars come into play, which could complicate things).
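
Something like the following is what I have in mind, I think (hedged heavily: the `Sprite`/`Texture` types and the lifetime annotation are just my guess from reading the docs):

```rust
struct Texture { id: u32 }

// The lifetime parameter says a Sprite can't outlive the Texture it
// (optionally) points at.
struct Sprite<'a> {
    texture: Option<&'a Texture>,
}

fn main() {
    let textures = vec![Texture { id: 0 }, Texture { id: 1 }];

    // `get` hands back an Option<&Texture> tied to the lifetime of `textures`.
    let sprite = Sprite { texture: textures.get(1) };

    match sprite.texture {
        Some(t) => println!("drawing with texture {}", t.id),
        None => println!("no texture assigned"),
    }

    // Mutating `textures` here would be rejected while `sprite` still
    // borrows from it (and would also require `textures` to be `mut`).
}
```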


Rust solutions would probably be the following: some kind of runtime assistance (`Rc<T>`, `Arc<T>`, et al.); using indices as you mentioned (though with `list.get(index)` you'd still have to deal with the fact that it might not be valid, since `get` returns an `Option<&T>`)[1]. Another solution might be to allocate the textures in an arena that lives outside the scope of your game logic, and have both the texture list and the sprites hold borrows (note: I'm not sure about this, as I haven't done much with arenas yet).

Although I'm unsure where the `not nil` as discussed above comes into play here. What part in Nim would be `nil` here where Rust would have `Option<T>`? The difference between `Option<&Texture>` and `&Texture` is that you have to somehow deal with the possibility of no texture when handling the former.

[1] I should note that actual indexing (`list[index]`) will assume you know there's an element there and panic if there isn't. This is one of the things I dislike, and I hope there will be an optional (no pun intended) lint post-1.0.
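
A rough sketch of the index-based and `Rc`-based options (the `Texture` type here is just a stand-in, not from any real codebase):

```rust
use std::rc::Rc;

#[derive(Debug)]
struct Texture { id: u32 }

fn main() {
    // Index-based: the sprite stores a plain index; `get` returns an
    // Option<&Texture>, so a stale index shows up as None instead of UB.
    let textures = vec![Texture { id: 0 }, Texture { id: 1 }];
    let sprite_texture_index = 1usize;
    match textures.get(sprite_texture_index) {
        Some(t) => println!("draw with {:?}", t),
        None => println!("missing texture"),
    }
    // `textures[99]` would panic rather than silently misbehave.

    // Rc-based: the list and the sprite share ownership, so no borrow of
    // the list is held and it can still be mutated freely.
    let mut shared: Vec<Rc<Texture>> = vec![Rc::new(Texture { id: 2 })];
    let sprite_texture: Rc<Texture> = Rc::clone(&shared[0]);
    shared.push(Rc::new(Texture { id: 3 }));
    println!("sprite uses {:?}, list has {} textures", sprite_texture, shared.len());
}
```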


> I remember this comment, but I didn't remember it achieving UB in debug code.. I'll look through the history and take another look.

It's: https://news.ycombinator.com/item?id=9050999


So I remembered correctly, Nim does not reach UB in non-release code (or rather, code without --boundCheck:on), it throws an exception. I still think this is a reasonable solution. We catch these errors during development iteration or enable the checks for safety-critical portions of release code (or the entire project).. and we can opt out of these checks if we need the performance and safety isn't as important (games, simulations, etc).

I remember Rust does not bounds-check its iterators, so you don't really need to disable bounds-checks (indeed, you cannot), while Nim currently does this more naively and loses some performance for it. That's a nice thing Rust does, but not something Nim can't eventually catch up to. See this comparison for further reference: http://arthurtw.github.io/2015/01/12/quick-comparison-nim-vs...


> So I remembered correctly, Nim does not reach UB in non-release code (or rather, code without --boundCheck:on), it throws an exception.

That's not really correct. It's undefined behavior either way; you're just getting lucky because the compiler doesn't happen to take advantage of the undefined behavior to perform optimizations at -O0.


I'm not sure what you're implying.. you can turn on most optimizations and still keep nil-checks on in Nim (either for the whole project via --nilChecks:on, or for select portions of code via {.push.}).

Unless you're claiming your example was still hitting UB even with nil-checks on, and just happened to throw an exception by chance, I'm not really sure how you figure UB is still happening here (since the exception will be thrown, preventing the deref). Nothing is preventing you from using nil-checks in production code.


> In an earlier comment I was able to construct a Nim program that exhibited very different behavior in debug and optimized builds, using nothing but GC'd pointers.

This intrigued me so I found the comment: https://news.ycombinator.com/item?id=9050999



