Reference counting does not trace dead objects. Most of its activity is concerned with maintaining reference counts on live objects.
When a reference count hits zero, that's when refcounting begins to be concerned with a dead object; and that part of its operation corresponds to the sweep activity in garbage collection, not to the tracing of live objects.
It is not a dual to garbage collection concerned with its negative spaces; it's simply a less general (we could legitimately say lesser) garbage collection that doesn't deal with cycles on its own.
> it's simply a less general (we could legitimately say lesser) garbage collection that doesn't deal with cycles on its own.
There are different implementation and performance trade-offs associated with both. I’ll focus on the two that are most meaningful to me.
Reference counting can be added as a library to languages that don’t want or can’t have a precise garbage collector. If you work in C++ (or Rust), it’s a very viable way to assure that you have some measure of non-manual clean up while maintaining precise resource control.
Similarly, when performance matters, reference counting is essentially deterministic, and much easier to understand and model.
In a lot of situations, garbage collection is an overall better strategy, but it’s not a strict superset, and not always the right choice.
> There are different implementation and performance trade-offs associated with both.
They are not the same because there are "semantic tradeoffs".
Reference counting does trace dead objects. When the reference count hits zero, you have to recursively trace through all objects referenced by the newly dead object. That’s a trace of dead objects.
That can be identified as a finalization-driven sweep. The object whose refcount hits zero is finalized, and the routine for that drops its references to other objects.
Garbage collection also traces dead objects, or at least the kinds of GC implementations that are not copying do. When the marking is done, the heap is traversed again to identify dead objects, which are put onto a free list. That's a trace of dead objects. (Under copying collection, that step is implicit: live objects are moved to a new heap, and the vacated space is made entirely available for bump allocation.)
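To make the point concrete, here's a minimal Python sketch (the `Node` type and `release` helper are hypothetical, just for illustration): when a refcount decrement hits zero, the newly dead object's outgoing references are recursively released, which is exactly a trace over dead objects.

```python
# Hypothetical toy refcounting: dropping the last reference to an object
# recursively visits -- i.e. traces -- everything it was keeping alive.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.refcount = 0
        self.children = list(children)
        for c in self.children:
            c.refcount += 1

def release(node, freed):
    """Decrement a node's refcount; if it hits zero, free it and
    recursively release everything it referenced."""
    node.refcount -= 1
    if node.refcount == 0:
        freed.append(node.name)       # the "sweep" of this dead object
        for child in node.children:   # the trace through its references
            release(child, freed)

# Dropping the one external reference to the root traces the dead subgraph.
leaf = Node("leaf")
mid = Node("mid", [leaf])
root = Node("root", [mid])
root.refcount = 1                     # one external reference
freed = []
release(root, freed)                  # freed is ["root", "mid", "leaf"]
```

The recursion here is the part of refcounting that behaves like a sweep: it only ever touches objects that have just died.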
My favorite quote from Alan Perlis: “Symmetry is a complexity-reducing concept (co-routines include subroutines); seek it everywhere.”
Unfortunately, the symmetry provided by higher levels of your software stack is not always reflected by the lower levels. This causes your symmetry to be created as an abstraction. (Subroutines are more efficient to implement than co-routines on real hardware.) Abstractions are leaky and this, in turn, causes the superficial symmetry to merely mask the underlying asymmetry.
If done exceptionally well, the only visible evidence of the leak will be substantial performance disparities between seemingly symmetric features (well done! this is unusual).
If done with the normal level of software design quality, the evidence will show up as quirky behavior, imperfect symmetry, "well this one always works but the other one is not reentrant", etc.
Yes! A subroutine call is (a) allocation of an activation record, (b) a context switch, and (c) a return that combines deallocation with the switch back, while coroutines keep all of these concepts separate. Why not start with a powerful and general concept and optimize for that one?
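As a rough illustration (Python generators as stand-ins for coroutines), the three phases that a plain call fuses together become separately visible operations:

```python
# A plain function call fuses frame allocation, the context switch,
# and the deallocating return into one step; a generator exposes them.

def square(x):
    return x * x          # alloc frame + run + (dealloc & switch back), fused

def squares():
    n = 0
    while True:
        n += 1
        yield n * n       # switch back to the caller WITHOUT freeing the frame

g = squares()             # (a) allocation of the activation record only
first = next(g)           # (b) switch in, run, switch out; frame kept -> 1
second = next(g)          # another switch; the saved state (n) survived -> 4
g.close()                 # (c) deallocation, now its own separate operation
```

This is only an analogy, of course; real coroutine implementations differ in how activation records are allocated and reused.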
> Why not start with a powerful and general concept and optimize for that one?
As with basically everything, there are tradeoffs involved. Sometimes restrictions can be helpful for keeping things understandable, which can in turn make optimizations easier to implement. As a rather hamfisted example: completely unrestricted goto. Very general, debatably powerful, but relatively easy to use in a way that makes comprehension difficult. That same generality can also make it difficult to verify that optimizations don't change observable program semantics compared to something more restricted.
Am I missing something?
"This was the answer I needed! Rather than visiting all the live objects, I wanted to only visit the dead ones, and reference counting would let me do that.
So I added a way of maintaining a reference count for all the nodes in the doc. When we produce a new document, we decrement the reference count of the old root node (it will always be 0 afterwards). So we recursively decrement the ref count of its children, and so on. This gives me exactly what I wanted — a way to find all the nodes that were not reused, without having to visit most of the nodes in the doc."
I think there's a bit missing from the description here, in the "and so on": you would only recurse on a node when its new refcount is zero, right? (And the set of zero-refcount nodes produced is exactly the set of dead nodes.)
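Here's a small Python sketch of the scheme as I read it (the `DocNode` type and helpers are hypothetical): decrement the old root, recurse into a child only when its count also reaches zero, and nodes reused by the new document keep a nonzero count and are never visited.

```python
# Hypothetical document nodes with refcounts; only dead nodes are visited.

class DocNode:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.refcount = 0

def link(parent, child):
    parent.children.append(child)
    child.refcount += 1

def release(node, dead):
    node.refcount -= 1
    if node.refcount == 0:            # recurse only when the node just died
        dead.append(node.name)
        for c in node.children:
            release(c, dead)

shared = DocNode("shared")            # node reused by the new document
old_child = DocNode("old_child")      # node unique to the old document
old_root = DocNode("old_root")
link(old_root, shared)
link(old_root, old_child)
old_root.refcount = 1                 # the old document handle itself

new_root = DocNode("new_root")
link(new_root, shared)                # the new doc reuses `shared`

dead = []
release(old_root, dead)               # dead is ["old_root", "old_child"];
                                      # `shared` was never recursed into
```

The recursion stops at `shared` because its count drops from 2 to 1, so the work done is proportional to the dead nodes, not the whole document.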
Isn't this sort of just like having a dirty flag on nodes, and then replacing dirty nodes?
Which is garbage collection?
I’ve often noted that most projects of a certain size tend to implement some form of garbage collection and allocation.
Perhaps general purpose systems of these sorts aren’t suitable for specialized applications… but I don’t get the “hate” (if you can call it that) which some programmers have for GC.
As someone that likes GCs, I understand it.
GCs have a lot of tradeoffs involved. It's impossible to check all boxes and that means that there's going to be something to gripe about.
If you want your GC to be memory efficient you are likely trading off throughput.
If you want your GC to allocate fast and avoid memory fragmentation, you are likely over-provisioning the heap.
If you want to minimize CPU time in GC, you'll likely increase pause time.
If you want to minimize pause time, you'll likely increase CPU time doing a GC.
All these things can make someone ultimately hate a GC.
However, if you want a programming language that deals with complicated memory lifetimes (think concurrent data structures), then a GC is practically essential. It's a lot harder to correctly implement something like Java's "ConcurrentHashMap" in C++ or Rust.
Is it possible that by knowing less about garbage collection in Java this person might have arrived at the same solution earlier? After all his initial construction of a tracing garbage collector was wasted effort.
(OP here) It’s possible, but I doubt it. Perhaps the way I wrote it makes it sound like I was thinking about it as a GC problem from the beginning, but I wasn’t. It wasn’t until I started seeing it as being like GC that (a) I realized that my naive solution was akin to tracing GC, and (b) I came up with the reference counting solution.
If Grug worry about garbage collector, it mean Grug working on problem already solved by Sun Microsystems instead of problem Grug paid to solve.
Except, if someone want to pay Grug to work on garbage collector for javascript framework, Grug put in position where Grug learns what Grug don't already know about it because it now Grug's job. So Grug understand why Sun solve problem, why problem hard, tell other Grug about isomorphisms between spanning trees. Now other Grug know more about what other Grug don't know, why other Grug not make same mistake of knowing better than Sun Microsystems either.
Had similar epiphanies some many years ago (ugh, I'm old) when I was playing around writing a garbage collected persistent (in the 'stored on [spinny spinny] disk' sense not the FP sense of the word) programming language / runtime. This was back when it was roughly infeasible to be holding large "worlds" of objects purely in-memory on machines of the style of the time, so intelligently paging objects in and out was imperative. (Aside, I think with the recent doubling... tripling of RAM prices this area of thinking is now again more imperative)...
In any case, if one is doing GC in such a language, a full tracing collector (whether copying or mark & sweep) is madness, as to find live references means walking nearly the entire heap including the portions living in secondary storage, and now you're in a world of pain.
In this case, an intelligent cycle-collecting garbage collector in the Bacon style was the answer. You keep an in-memory table of reference counts, and you only trace when you hit cycles. [and hopefully design your language semantics to discourage cycles]
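For a flavor of the idea, here's a heavily simplified Python sketch of "trial deletion" in the spirit of that style of cycle collector (the `Obj` type and helpers are hypothetical, and a real collector buffers candidates and uses colored node states rather than rebuilding sets like this): starting from a suspected object, subtract the references that originate inside its reachable subgraph; whatever drops to zero is kept alive only by the cycle.

```python
# Hypothetical trial-deletion sketch: local tracing from one suspect,
# instead of a full-heap mark phase.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.refs = []

def point(src, dst):
    src.refs.append(dst)
    dst.refcount += 1

def collect_cycle_from(candidate):
    # Gather only the subgraph reachable from the suspect.
    seen, stack = set(), [candidate]
    while stack:
        o = stack.pop()
        if o not in seen:
            seen.add(o)
            stack.extend(o.refs)
    # Trial deletion: subtract references that come from inside the subgraph.
    trial = {o: o.refcount for o in seen}
    for o in seen:
        for child in o.refs:
            if child in seen:
                trial[child] -= 1
    # Scan: anything still externally referenced rescues what it reaches.
    live, stack = set(), [o for o in seen if trial[o] > 0]
    while stack:
        o = stack.pop()
        if o not in live:
            live.add(o)
            stack.extend(c for c in o.refs if c in seen)
    return [o.name for o in seen - live]   # the dead cycle, if any

# Two objects pointing at each other, with no external references:
a, b = Obj("a"), Obj("b")
point(a, b)
point(b, a)
garbage = collect_cycle_from(a)            # finds both "a" and "b"
```

The appeal for a disk-backed heap is that the trace is local to the suspect's subgraph rather than the whole world.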
> you only trace when you hit cycles
How do you tell when you've hit a cycle?
> hopefully design your language semantics to discourage cycles
Why? Cyclical structures can be very useful. For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.
[UPDATE] Two responses have now pointed out that this particular case can be handled with weak pointers. But then I can just point to a general graph as an example where that won't work.
> For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.
That's not a true cycle, it's just a back link for which "weak" reference counts suffice. The containment relation implies that the container "owns" the object, so we don't need to worry about the case where the container might just go away without dropping its contents first.
(Edit: I agree that when dealing with a truly general graph some form of tracing is the best approach. These problem domains are where tracing GC really helps.)
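In a language with refcounting plus weak references, the container/back-pointer case looks something like this Python sketch (the `Container`/`Item` types are made up for illustration): the owning direction is strong, the back-pointer is weak, and no cycle ever forms.

```python
import weakref

class Item:
    def __init__(self, container):
        # Weak back-pointer: does not keep the container alive.
        self._container = weakref.ref(container)

    @property
    def container(self):
        return self._container()   # None once the container is gone

class Container:
    def __init__(self):
        self.items = [Item(self)]  # strong, owning reference

c = Container()
item = c.items[0]
assert item.container is c         # back-pointer works while the owner lives
del c                              # drop the owner's last strong reference
# With no strong cycle, CPython's refcounting frees the Container at once,
# and the weak back-pointer now reports None.
```

Because the container owns its items, the item can never outlive a valid answer to "who contains me" except by the container being gone, which is exactly what the `None` result reports.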
OK, then I'll pick a different example: a general graph.
If you put all the nodes into an array and use weakrefs (or indices) for node->node edges you move the node ownership to a single object which will make your garbage collection faster for either algorithm, and will also improve your memory locality.
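A minimal Python sketch of that arrangement (the `Arena` type is hypothetical): one array owns every node, and edges are plain integer indices, so even a "cyclic" graph has no ownership cycles and is freed all at once with its arena.

```python
# Hypothetical index-based graph arena: single ownership, no pointer cycles.

class Arena:
    def __init__(self):
        self.nodes = []             # the single owner of all node storage

    def add(self, value):
        self.nodes.append({"value": value, "edges": []})
        return len(self.nodes) - 1  # a handle is an index, not a pointer

    def connect(self, src, dst):
        self.nodes[src]["edges"].append(dst)

g = Arena()
a = g.add("a")
b = g.add("b")
g.connect(a, b)
g.connect(b, a)                     # a "cycle", but only between indices
neighbors_of_a = [g.nodes[i]["value"] for i in g.nodes[a]["edges"]]
```

The locality benefit comes from the nodes living contiguously in one allocation; the GC benefit comes from the collector (of either kind) seeing one owner instead of a web of inter-node references.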
"How do you apply algo X to a problem which has been narrowly-tailored and/or under-specified to specifically exclude X" isn't exactly a constructive inquiry.
A general graph is not exactly "narrowly tailored". Graphs are pretty common.
No, but they are under-specified. OP is specifically working with a document-hierarchy data structure with a natural ownership/weak-pointer distinction to exploit -- no need to abstract it to a general graph.
Yes, but then they also said:
> hopefully design your language semantics to discourage cycles
thus expanding the scope of their comment beyond that specific use case.
Yes, but they said that in the context of a tailored language for persistent/HDD-backed data, where implicitly performance crosses the line into an additional measure of correctness, rather than an orthogonal one. ("to find live references means walking nearly the entire heap including the portions living in secondary storage, and now you're in a world of pain")
So the "increased cognitive overhead" is intrinsic to the problem domain, not an unforced defect of the language design. Overgeneralization in such a case would induce even worse overhead as there'd be no user-level way to fix perf.
You don't always have to walk the entire program heap to find cyclic references, only the fraction of it that may in fact be involved in a cycle. That fraction may or may not be inherently small enough, depending on the kind of problems you'll be working with.
Fair point.
> For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.
Does it frequently need an owning reference, though, or would a weak reference suffice? Usually the latter.
A fair point, but then you're still putting the burden on the programmer to figure out where a weak reference is appropriate.
But then I'll just choose a different example, like a general graph.
> How do you tell when you've hit a cycle?
https://pages.cs.wisc.edu/~cymen/misc/interests/Bacon01Concu...
TLDR there are heuristics which can give you a hint. And then you trigger a local trace to see.
> Why?
Because then you incur the cost of a trace -- and potentially paging in from slow-slow disk -- vs a simple atomic refcount.
Even just a localized trace on live objects is a pointer-chasing cache & branch prediction killer.
I’m curious - did fragmentation end up being a significant issue, whether in memory or offloaded?
I never got far enough to push that into a production system but I suspect it would have, yes.
I can see a periodic compacting phase could be useful in a system like that.
In the DB world there's good research around similar topics. e.g. LeanStore and Umbra -- Umbra in particular does some nice things with variable sized buffers that I believe are expected to help with fragmentation https://db.in.tum.de/~freitag/papers/p29-neumann-cidr20.pdf