
I don't see how immutability makes much difference to garbage collection. Garbage collectors don't look at the values inside the memory they're collecting, only at whether something live still retains that memory. Whether a value is mutable or immutable has absolutely no bearing on the performance or logic of the GC: either something is still using the memory, or it is free to collect.
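
To make that concrete, here is a minimal sketch of a mark phase (toy code, not any real collector's implementation): liveness is computed purely from the reference graph, and the payload bytes, mutable or not, are never inspected.

    import java.util.*;

    // Illustrative only: a toy mark phase. Liveness is decided purely by
    // walking references from a root; the collector never reads the data
    // an object stores, so whether that data is mutable is irrelevant here.
    class Marker {
        static Set<Object> mark(Object root, Map<Object, List<Object>> refs) {
            Set<Object> live = Collections.newSetFromMap(new IdentityHashMap<>());
            Deque<Object> stack = new ArrayDeque<>();
            stack.push(root);
            while (!stack.isEmpty()) {
                Object o = stack.pop();
                if (!live.add(o)) continue;               // already marked
                for (Object child : refs.getOrDefault(o, List.of()))
                    stack.push(child);                    // follow references only
            }
            return live;                                  // everything else is garbage
        }
    }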

Also, you've just described existing GC systems; the Java G1 collector is very similar to that (though it's a lot more complicated).



Generational GC usually depends on write barriers to detect creation of references to newer generations inside older generations.

Immutability prevents the mutation of older generations, so no new references from old objects to young ones can ever be created.

So I expect generational GC to be easier to implement. I wouldn't expect it to be faster or slower, though, because programs will need to be written differently to cope with the limitations on mutability: O(1) algorithms will likely be replaced by O(log n) ones.
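
For reference, a minimal sketch of the kind of card-marking write barrier being discussed (the names here are made up for the sketch, not HotSpot's actual code). Every reference store dirties a card so the collector can later rescan that region of the old generation for pointers into the young generation; if old objects are immutable, the store never happens and both the barrier and the rescan disappear.

    // Illustrative only: a card-marking write barrier of the kind generational
    // collectors emit on every reference store.
    class CardTable {
        static final int CARD_SHIFT = 9;        // one card covers 2^9 = 512 bytes
        final byte[] cards;

        CardTable(int heapBytes) {
            cards = new byte[(heapBytes >> CARD_SHIFT) + 1];
        }

        // Run alongside every store of the form `oldObj.field = youngObj`.
        // The dirty card tells the collector to rescan that 512-byte region
        // for old-to-young references. If old objects can never be mutated,
        // such a store cannot occur and no barrier is needed.
        void onReferenceStore(int targetAddress) {
            cards[targetAddress >> CARD_SHIFT] = 1;   // mark card dirty
        }
    }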


In the case of Clojure, the log is base 32, so it's close enough to O(1) to make no difference in most circumstances.
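
A quick back-of-the-envelope check of that claim (a throwaway snippet, not Clojure's internals): the depth of a 32-way trie grows as log base 32 of the element count, so even a billion-element structure is only six levels deep.

    // Throwaway check: depth of a 32-way trie for various sizes.
    public class TrieDepth {
        public static void main(String[] args) {
            for (long n : new long[]{1_000L, 1_000_000L, 1_000_000_000L}) {
                int depth = (int) Math.ceil(Math.log(n) / Math.log(32));
                System.out.printf("n = %,d -> depth %d%n", n, depth);
            }
        }
    }

This prints depths of 2, 4, and 6 respectively.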


Many immutable algorithms replace arrays with trees. Trees are pointer-heavy, and pointers are less cache-friendly than arrays.

Memory access is often over 100x the cost of an L1 cache hit. It doesn't take too many of those to make a big difference if you're CPU bound.

My comments are general; I'm sure Clojure has access to arrays where necessary, for interop at least.
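
A rough illustration of the effect (a crude timing sketch, not a rigorous benchmark): summing the same values from a contiguous array versus a chain of linked nodes. The array loop streams through memory the prefetcher can predict; each node hop is a dependent load that can miss cache.

    // Crude illustration, not a rigorous benchmark: contiguous array vs.
    // pointer-chasing linked nodes. On typical hardware the array sum runs
    // several times faster once the working set exceeds cache.
    public class PointerChase {
        static final class Node { int value; Node next; }

        public static void main(String[] args) {
            int n = 5_000_000;
            int[] array = new int[n];
            Node head = null;
            for (int i = n - 1; i >= 0; i--) {
                array[i] = i;
                Node node = new Node();
                node.value = i;
                node.next = head;
                head = node;
            }

            long t0 = System.nanoTime();
            long arraySum = 0;
            for (int v : array) arraySum += v;      // sequential, prefetch-friendly
            long t1 = System.nanoTime();

            long listSum = 0;
            for (Node p = head; p != null; p = p.next) listSum += p.value; // dependent loads
            long t2 = System.nanoTime();

            System.out.printf("array: %d ms, list: %d ms (sums %d/%d)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, arraySum, listSum);
        }
    }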


Again, true, but the big problem I'm facing isn't raw performance so much as guaranteed latency.


But the constant factor is Z, a very large number. A number so large it has a color and name. It pays taxes, has lunch and sees movies on the weekend.


Not true in the context of dynamic languages. The constant factor for persistent maps in that context is quite small.


The actual act of freeing memory is indeed possible (and common) without stopping the world. Heap defragmentation is another story, though: it involves moving objects to different locations in memory, and if the memory is mutable, measures like stop-the-world are required, since a multi-byte move is not atomic.


Doesn't matter. In an immutable model, the old copy is guaranteed not to change, so you can safely copy it non-atomically. (That's one of the big benefits of immutability.)
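
A sketch of that idea (illustrative only, not how any particular collector is written): the collector copies the object with no synchronization, then publishes the new location with a single atomic reference swap. Readers racing with the move see either copy, and the two are indistinguishable because the payload can't change.

    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative only: relocating an immutable object while readers run.
    final class Cell {
        final int value;                 // immutable payload
        Cell(int value) { this.value = value; }
    }

    class Evacuator {
        final AtomicReference<Cell> slot = new AtomicReference<>(new Cell(42));

        // Readers may race with relocate(); they see the old or the new Cell,
        // and both hold identical contents.
        int read() { return slot.get().value; }

        // The copy itself needs no atomicity: the source Cell can never
        // mutate mid-copy. Only the pointer flip is atomic.
        void relocate() {
            Cell old = slot.get();
            Cell copy = new Cell(old.value);   // non-atomic copy, safe
            slot.compareAndSet(old, copy);     // publish the new location
        }
    }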


Which makes really good incremental collection much easier to write.


But not faster.


True. But what I find myself wishing for is not more throughput, but seemingly pauseless operation.



