Hacker News | nemaar's comments

> If you design a language that doesn't allow pointers to be shared across threads at all, then you wouldn't need a borrow checker.

Is that actually true? I'm pretty sure you need the borrow checker even for single-threaded Rust to prevent use-after-frees.


There's even more than just UAFs to worry about in a single-threaded context, but yes, you are correct.

Here's a good post on why shared mutability is dangerous even in single-threaded contexts: https://manishearth.github.io/blog/2015/05/17/the-problem-wi...
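As a minimal sketch of the kind of single-threaded bug the borrow checker rejects: hold a reference into a Vec and then push to it, so the push may reallocate the buffer and leave the reference dangling.

  fn main() {
      let mut v = vec![1, 2, 3];
      let first = &v[0];     // shared borrow into v's heap buffer
      v.push(4);             // may reallocate and leave `first` dangling
      println!("{}", first); // the borrow is still live here
  }

rustc refuses to compile this (error E0502: cannot borrow `v` as mutable because it is also borrowed as immutable), which is exactly the use-after-free class of bug, with no threads involved.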


You may have to define your terms more carefully and extend past "doesn't allow pointers", but it can be true. Look to Erlang: the key to Erlang's safety isn't actually its immutable values, but the fact that no message travelling between its "processes" (threads in conventional parlance, just not "OS threads") can carry any sort of reference or pointer. That is what makes Erlang safe without any concept of borrow checking. It also semantically copies all values (there are internal optimizations so certain kinds of values are technically not copied, but at the language level it's all copies) and each process is GC'd, so with respect to the Rust borrow checker I mention this only in the context of cross-thread sharing safety.

But in general, if threads can't communicate any sort of pointer or reference that allows direct, unmediated access to the same value that some other thread will see, there's no need for a "borrow checker" for thread safety.

(Note that "but what if I have a thing that is just a token for a value that people can potentially read and write from anywhere?" is not an exception to this, because in this context, such access would not be unmediated. This access would still require messages to and from the "holding" process. This sort of safety won't stop you from basically deliberately re-embedding your own new sorts of races and unsafe accesses in at the higher level of your own code, it just won't be a data race in the same way that multiple threads reading and writing through the same pointer is a data race at the lower level. The main solution to this problem is "Doctor, it hurts when I do this." -> "Don't do that.")


Let me clarify: "wouldn't need a borrow checker for the specific requirement of ensuring thread safety". Clearly the borrow checker is quite useful in single-threaded contexts on its own. :P The point is just that it's perfectly valid to have a language that doesn't have "reference semantics" at all.


> something as powerful as what I created

Could you give us more detail? It sounds intriguing.


I developed a new static analysis (a type system, to be precise) to guarantee statically that a concurrent/distributed system could fail gracefully in case of (D)DoS or other causes of resource exhaustion. Other people in that field developed comparable tools to statically guarantee algorithmic space or time complexity of implementations (including the good use of timeouts/resource sandboxes if necessary). Or type system-level segregation between any number of layers of classified/declassified information within a system. Or type systems to guarantee that binary (byte)code produced on a machine could find all its dependencies on another machine. Or type systems to prove that an algorithm was invariant with respect to all race conditions. Or to guarantee that a non-blocking algorithm always progresses. Or to detect deadlocks statically. etc.

All these things have been available in academia for a long time now. Even languages such as Rust or Scala, which offer cutting-edge (for the industry) type systems, are mostly based on academic research from the 90s.

For comparison, garbage collectors were invented around 1960 and were still considered novelties in the industry in the early 2000s.


Is there a good resource (a review paper, maybe?) to get an overview of such programming language / type system topics?


I can't think of any off the top of my head.

Perhaps looking at the proceedings of ICFP and POPL can help?


I don't follow influencers, but my guess is that they already do this; at the very least they use filters. If someone can use all these tools to gain a considerable amount of fame and fortune, is (s)he really not intelligent? Of course, all these online personas will be lies, even bigger lies than today, but I don't think it really matters. I'd argue that most people following this content are not looking for reality.


Most functional languages parse `a b c d e f` as `a(b, c, d, e, f)`; it does not matter what b, c, d, e, f are. Do you know any language where this is different?


No functional languages do that.

OCaml parses `a b c` as `((a b) c)`. If the compiler can determine that `a` is a function taking two arguments, it will optimise the code so that it's effectively `a(b, c)`. But in general that's not possible, especially when the compiler determines that `a` is a function with a single argument (in which case its return value must be another function, which is in turn called with `c`) or when `a` is a first-class function (e.g. passed as an argument).
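For illustration only, here is roughly the same mechanism spelled out in Rust, where each application has to be written explicitly: a one-argument function returns another function, which is then applied to the next argument.

  fn main() {
      let add = |a: i32| move |b: i32| a + b; // `add` takes one argument...
      let add_one = add(1);                   // ...and returns another function
      println!("{}", add_one(2));             // prints 3, i.e. OCaml's `add 1 2`
  }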


My toy FP language did. :) It's perfectly possible to just parse a list of arguments and figure out currying in a later compiler stage. In my experience it even helps somewhat with producing nicer arity-specific error messages, which users might appreciate.


OCaml and Haskell parse `a b c d e f` as `((((a b) c) d) e) f`.


And while it's different from Algol-descended languages, I don't think it's particularly confusing. (Not that you were saying so, just continuing the conversation.) You can put together a confusing expression with it, but I can put together confusing things with Algol syntax without too much effort too. I've got the source code with my name in the blame to prove it.


I thought more languages did this, but at least Nix and OCaml do not actually behave the way I thought.

In Ruby, however, it is a bit uglier:

  def f x
    x + 1
  end

  puts f f 1
  # => 3


I don't understand your objection; what output would you like to see instead?


GP's point is that while, yes, we know there's no ambiguity since `f` has arity 1, in general you might not have the arity of any given function fresh in your head, and therefore can't tell (in Ruby) just from looking at `f f 1` whether it means a single invocation of an arity-2 function or two invocations of an arity-1 function.


Ah! Right. It helps to have all functions be of arity 1 to disambiguate, yes.


I really believe this is the only way you can write good software. It's mind-blowing to me that most people try to "figure out" the problem before seeing what it actually is. I always feel strange when someone writes a "study" by reading the documentation of some API or, even worse, by reading someone's feature request. The writer of a feature request usually knows even less about the whole thing and really did not think things through. We need tools and programming languages where you can create really dirty but working solutions and make them iteratively better. You need to find all the edge cases and pitfalls, and for that you need to fail. Of course, you need to fail fast; this method does not work if the iteration is slow and the thing is already in production when you find out that it barely works.


Where do I go to get paid to actually put in the TIME and effort required to produce something elegant and high quality?


As a contractor working on a project basis, who can leverage this technique to produce code of such high quality that a commensurate price can be commanded.

You’d need to find a niche that values this level of quality, but if that exists, and if the theory is true, then there’s your gold.


Love to see that magical place.

I've come to think that the whole concept of the MVP/prototype became a bane of software development once the management/business side became aware of its existence. Architecture and design sessions can be skipped because we'll just build a prototype.

I have yet to see a prototype that did not end up in production. It's "good enough" software; let's move on to the next feature.

When you are really lucky, you can revisit your prototype a year or two later and try to improve its design now that you have some data on its actual usage, but first you have to figure out again what the heck you actually did...


If you don't have memory issues in your code, then either it's so small that everyone participating in its development can actually keep it in a perfect state, or no one has searched hard enough yet. Past a certain complexity, bugs just appear; you don't have to do anything, they are just there.


I’m not saying I never wrote bugs, but tools like SAL or RAII types that manage sharing help. By the time I checked in, they were either squashed or not found until after I left Microsoft.

I built a domain-agnostic virtualization and streaming platform that generalized the "Click2Run" features of Office and could be applied retroactively to any traditional app that didn't include kernel code (e.g. it worked on Adobe Creative Suite).

I also worked on the early phases of the sandbox that runs WinRT for JS, which also required paying my dues and fixing bugs or implementing new features (e.g. the fetch API) for the Trident engine in IE.

I wouldn’t consider these trivial projects.


Can you tell us anything about that simulator? It sounds wild. :)


I'm interested in your language ideas. Could you please share them with as much detail as you can/want?


Well, I've stayed up late one night already trying to get this crap out of my head. As anticipated, it has only made it worse in the short term, but I expect once I'm "done" it'll go away.

It'll show up at jerf.org/iri, but I warn you that A: it may be a while yet and B: I'm not 100% sure it'll ever be publishable, but we'll see.


I think the idea that a program can load files as code is insane in itself. There shouldn't be such a mechanism in an ideal world. The operating system would decide what you can actually load and load it for you.


How would you prevent that? If programs can load files into memory and execute data loaded into memory as code, they can load files as code. The former is necessary for obvious reasons, the latter for JIT.

It's also pointless. If a program can cause damage by loading harmful code it can also cause damage directly.

Restricting what programs can do is a great way to prevent experimentation and hinder progress.


You can prevent that by using the NX or XD bit. It's a CPU feature, and I believe support was added over 15 years ago in most popular OSs. Here's the commit for Linux: https://git.kernel.org/pub/scm/linux/kernel/git/history/hist...

> It's also pointless. If a program can cause damage by loading harmful code it can also cause damage directly.

It is not pointless, but it is also not perfect. That's why we have defense in depth: instead of having one perfect moat to protect the castle, you also have alligators and witches that turn people into frogs. :P


The OS can prevent it, but can it do so without making JIT impossible?


Well, if you can't make pages executable, you can "just in time" it by interpreting: write the code out to an optimized interpreter format instead, I suppose (but it will be much slower)... As an example, see WKWebView vs UIWebView (on iOS).
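Roughly, the trick is that the program stays plain data walked by an ordinary loop, so no memory page ever needs to be executable. A toy sketch (in Rust, purely for illustration):

  // A minimal bytecode interpreter: `code` is just data, so nothing here
  // requires an executable page.
  enum Op { Push(i64), Add, Mul }

  fn run(code: &[Op]) -> i64 {
      let mut stack = Vec::new();
      for op in code {
          match op {
              Op::Push(n) => stack.push(*n),
              Op::Add => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a + b); }
              Op::Mul => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a * b); }
          }
      }
      stack.pop().unwrap()
  }

  fn main() {
      // (2 + 3) * 4
      println!("{}", run(&[Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul]));
  }

A real JIT wins precisely by skipping this dispatch loop and jumping straight into generated machine code, which is why it needs executable pages in the first place.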


You can do that, but a slow JIT kind of misses the point.


> I think the idea that a program can load files as code is insane in itself. There shouldn't be such a mechanism in an ideal world.

What do you mean by loading files "as code"? Do you mean setting the execute bit on memory pages? You need the appropriate permissions to do that, and a locked-down system-wide policy can prevent programs from doing it as well.
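To make "the execute bit on memory pages" concrete: a program that wants to run generated code has to ask the kernel for executable memory explicitly, and that request is exactly the point a policy can refuse. A rough Linux-flavoured sketch in Rust (assuming the libc crate; details vary by OS):

  use std::ptr;

  fn main() {
      unsafe {
          // Request one writable, NON-executable page.
          let page = libc::mmap(
              ptr::null_mut(),
              4096,
              libc::PROT_READ | libc::PROT_WRITE,
              libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
              -1,
              0,
          );
          assert_ne!(page, libc::MAP_FAILED);

          // ... a JIT would write generated machine code into `page` here ...

          // Making it executable is a separate, explicit syscall that a
          // locked-down policy (W^X, SELinux execmem, code signing) can deny.
          let rc = libc::mprotect(page, 4096, libc::PROT_READ | libc::PROT_EXEC);
          println!("mprotect(PROT_EXEC): {}", if rc == 0 { "allowed" } else { "denied" });

          libc::munmap(page, 4096);
      }
  }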


I believe you are mixing things up. A chaotic system IS deterministic; it is just really hard to predict what it will do. The fact that we (or anyone) cannot predict what will happen in our world says nothing about whether it is deterministic. If you are talking about practical determinism only, then yes, you are most likely correct, and we'll never be able to predict the future. But it is misleading and incorrect to say that the world is not deterministic just because we cannot predict its behavior.
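A toy illustration of "deterministic but hard to predict" is the logistic map: a fixed formula with no randomness at all, yet two starting points that differ only in the seventh decimal place diverge to completely different trajectories within a few dozen steps. A quick sketch in Rust:

  fn main() {
      // Logistic map: x_{n+1} = r * x_n * (1 - x_n). Fully deterministic,
      // but tiny differences in the starting value grow exponentially.
      let r = 4.0_f64;
      let (mut a, mut b) = (0.4000000_f64, 0.4000001_f64);
      for step in 1..=40 {
          a = r * a * (1.0 - a);
          b = r * b * (1.0 - b);
          if step % 10 == 0 {
              println!("step {:2}: |a - b| = {:.6}", step, (a - b).abs());
          }
      }
  }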


I just outlined the argument where I show exactly how not being able to predict it means in effect that it is not deterministic. Can you provide some logic against that argument? (So far you just said you don't agree.)

Also please note (and I repeat myself) that I argue there is a great difference between "really hard to predict" and "physically impossible to predict for any being, based on our own notion of the universe".


I am not sure what kind of argument you would like; you are trying to redefine the word 'deterministic'. My argument is that it does not work like that. Just because we cannot predict something does NOT mean that it is not deterministic. Even if it is physically impossible for anyone to predict it, it can still be deterministic. That's just how this concept works. In practice this means nothing, so I think we can end this debate, because you are surely not convincing me and I doubt I can convince you. :)


I am interested: what definition of determinism are you using?

If it "means nothing in practice" and can be true or not true regardless of whether it is possible (even in theory) to test it, then I assume it is unrelated to science and you are using it in a philosophical sense?


> Can you provide some logic against that argument?

Turing machines are deterministic. Enumerating all Turing machines is deterministic. Whether any given Turing machine will terminate is unpredictable (the Halting problem).

Unpredictability does not entail nondeterminism, although distinguishing the two is not necessarily always possible.


But Turing machines are theoretical concepts. The physical processors we have are only physical approximations of a theoretical concept, and if the argument were made for the view of "determinism" that I outlined, it would be about the physical processor, the physical world itself.

Paraphrasing: the theoretical image of an atom, as well as of a set of atoms and other particles, is perfectly deterministic. But chaos theory talks about the real world, not the theoretical framework.


The concept of determinism has no physical limits. Don't try to redefine standard terminology.


The concept has no limits, true. The physical world does.

