<?xml version="1.0" encoding='utf-8'?>
<?xml-stylesheet type="text/xsl" href="https://mbien.dev/roller-ui/styles/atom.xsl" media="screen"?><feed xmlns="http://www.w3.org/2005/Atom">
    <title type="html">Michael Bien&apos;s Weblog</title>
    <subtitle type="html">don&apos;t panic</subtitle>
    <id>https://mbien.dev/blog/feed/entries/atom</id>
        <link rel="self" type="application/atom+xml" href="https://mbien.dev/blog/feed/entries/atom?tags=gc" />
    <link rel="alternate" type="text/html" href="https://mbien.dev/blog/" />
    <updated>2024-08-24T07:57:58+00:00</updated>
    <generator uri="http://roller.apache.org" version="6.1.4">Apache Roller</generator>
    <entry>
        <id>https://mbien.dev/blog/entry/object_pooling_determinism_vs_throughput</id>
        <title type="html">Object Pooling - Determinism vs. Throughput</title>
        <author><name>mbien</name></author>
        <link rel="alternate" type="text/html" href="https://mbien.dev/blog/entry/object_pooling_determinism_vs_throughput"/>
        <published>2009-08-06T00:08:40+00:00</published>
        <updated>2020-05-28T03:07:47+00:00</updated> 
        <category term="Java" label="Java" />
        <category term="3d" scheme="http://roller.apache.org/ns/tags/" />
        <category term="gc" scheme="http://roller.apache.org/ns/tags/" />
        <category term="java" scheme="http://roller.apache.org/ns/tags/" />
        <category term="performance" scheme="http://roller.apache.org/ns/tags/" />
        <category term="pooling" scheme="http://roller.apache.org/ns/tags/" />
        <category term="realtime" scheme="http://roller.apache.org/ns/tags/" />
        <content type="html">&lt;p&gt;
Object pooling in Java is often seen as an anti-pattern and/or wasted effort - but there are still valid reasons to think about pooling for certain kinds of applications.&lt;/p&gt;&lt;p&gt;The JVM allocates objects from the managed heap (young generation; contiguous and defragmented) much faster than you could ever recycle them from a hand-written pool running on top of the VM. A well-configured garbage collector is also able to discard unused objects quickly. In fact, GCs don&apos;t delete objects explicitly; they rather evacuate all surviving objects and sweep whole memory regions in a very efficient manner, and only when it&apos;s necessary, to reduce runtime overhead.&lt;/p&gt;&lt;p&gt;Object allocation (of small objects) on modern JVMs is even so fast that making a copy of an immutable object sometimes outperforms modifying a mutable (and often old) object. JVM languages like Scala or Clojure make heavy use of this observation. One of the reasons for this anomaly is that generational JVMs are designed to deal with loads of short-lived objects, which makes them inexpensive compared to long-lived objects in the old generation.
&lt;/p&gt;
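&lt;p&gt;To make the copy-instead-of-mutate point concrete, here is a minimal sketch of an immutable vector (illustrative code, not from my engine): every operation returns a fresh, short-lived object, which a generational GC can reclaim almost for free during a minor collection.&lt;/p&gt;

```java
// Hypothetical immutable value type: operations allocate a new
// short-lived object instead of mutating in place. On a generational
// JVM these small allocations are cheap (bump-the-pointer in the
// young generation) and the discarded instances die young.
final class ImmutableVec {
    final float x, y, z;

    ImmutableVec(float x, float y, float z) {
        this.x = x; this.y = y; this.z = z;
    }

    // returns a fresh instance; the receiver stays unchanged
    ImmutableVec add(ImmutableVec o) {
        return new ImmutableVec(x + o.x, y + o.y, z + o.z);
    }
}
```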

&lt;h3&gt;Performance does not always mean Throughput&lt;br /&gt;&lt;/h3&gt;
&lt;p&gt;Rendering a game with 60fps might be optimal throughput for a renderer but the performance might be still unacceptable when all frames are rendered in the first half of the second with the second half spent on GC ;). Even if Object Pools may not increase system throughput they can still increase determinism of your application. Here are some observations and tips which might help: &lt;br /&gt;
&lt;/p&gt;

&lt;h3&gt;When should I consider Object Pools?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;GC tuning did not help - you want to try something else&lt;/li&gt;
&lt;li&gt;The application creates a lot of objects which die in the old generation&lt;/li&gt;
&lt;li&gt;Your Objects are expensive to create but easy to recycle&lt;/li&gt;
&lt;li&gt;Determinism, e.g. response time (soft real-time requirements), is more important to you than throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Pro Pooling:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;pools reduce GC activity in peak times (worst case scenarios)&lt;/li&gt;
&lt;li&gt;are easy to implement and test (it&apos;s basically an array ;))&lt;/li&gt;
&lt;li&gt;are easy to disable (inject a fake pool which returns only new Objects)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Con Pooling:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;more (old) objects are referenced when a GC kicks in (increases GC overhead)&lt;/li&gt;
&lt;li&gt;memory leaks (don&apos;t forget to reclaim your objects!)&lt;/li&gt;
&lt;li&gt;cause additional problems in a multi-threaded scenario (new Object() is thread safe!)&lt;/li&gt;
&lt;li&gt;may decrease throughput&lt;/li&gt;
&lt;li&gt;cumbersome, repetitive client code&lt;/li&gt;
&lt;/ul&gt;
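&lt;p&gt;Regarding the multi-threading point: new Object() is thread safe, an array-backed pool is not, so a pool shared between threads needs its own synchronization. One possible approach is to back the pool with a lock-free queue - a sketch only, the class and method names are my own:&lt;/p&gt;

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative thread-safe pool backed by a lock-free queue.
class ConcurrentPool<T> {
    private final ConcurrentLinkedQueue<T> free = new ConcurrentLinkedQueue<T>();

    /** returns a recycled instance, or null if the pool is drained */
    T get() {
        return free.poll(); // non-blocking, thread safe
    }

    /** hands an instance back for reuse */
    void reclaim(T obj) {
        free.offer(obj);
    }

    boolean isEmpty() {
        return free.isEmpty();
    }
}
```

Note that even a lock-free queue adds contention and allocation overhead of its own, which is one more way a pool can end up costing throughput.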
&lt;p&gt;
Once you have decided to use pools, you have to make sure to reclaim all objects as soon as they are no longer used. One way of doing this is to apply the &lt;i&gt;static factory method&lt;/i&gt; pattern for object allocation and a per-object dispose method for deallocation.
&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;
/** Not thread safe! */
public class Vector3f {

    private static final int CAPACITY = 1024;
    private static final ObjectPool&amp;lt;Vector3f&amp;gt; pool;

    public float x, y, z;
    private boolean disposed;

    static {
        // pre-fill the pool so the first frames don't have to allocate
        pool = new ObjectPool&amp;lt;Vector3f&amp;gt;(CAPACITY);
        for (int i = 0; i &amp;lt; CAPACITY; i++) {
            pool.reclaim(new Vector3f());
        }
    }

    private Vector3f() {}

    /** static factory method; falls back to allocation if the pool is drained */
    public static Vector3f create(float x, float y, float z) {
        Vector3f v = pool.isEmpty() ? new Vector3f() : pool.get();
        v.x = x;
        v.y = y;
        v.z = z;
        v.disposed = false;
        return v;
    }

    /** hands the object back to the pool; it must not be used afterwards */
    public void dispose() {
        if (!disposed) {
            disposed = true;
            pool.reclaim(this);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
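&lt;p&gt;Client code then has to pair every create with a dispose, typically via try/finally. The following self-contained sketch (a simplified stand-in for the Vector3f/ObjectPool pair, equally not thread safe; all names are illustrative) shows the discipline:&lt;/p&gt;

```java
import java.util.ArrayDeque;

// Simplified stand-in for the Vector3f + ObjectPool pair: a small
// stack of free instances, refilled through dispose().
class Pooled {
    private static final ArrayDeque<Pooled> POOL = new ArrayDeque<Pooled>();

    float x, y, z;
    private boolean disposed;

    private Pooled() {}

    static Pooled create(float x, float y, float z) {
        Pooled p = POOL.isEmpty() ? new Pooled() : POOL.pop();
        p.x = x; p.y = y; p.z = z;
        p.disposed = false;
        return p;
    }

    void dispose() {
        if (!disposed) { // guard against double-reclaim
            disposed = true;
            POOL.push(this);
        }
    }

    static int poolSize() { return POOL.size(); }
}

// usage: pair every create() with a dispose(), e.g. via try/finally
class PoolClient {
    static float lengthSquared(float x, float y, float z) {
        Pooled v = Pooled.create(x, y, z);
        try {
            return v.x * v.x + v.y * v.y + v.z * v.z;
        } finally {
            v.dispose(); // hand the object back on every code path
        }
    }
}
```

Forgetting the finally block is exactly how the memory leaks listed under "Con Pooling" sneak in.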
&lt;p&gt;
To demonstrate the perceived performance difference I captured two flyovers of my old &lt;a href=&quot;//mbien.dev/blog/entry/pictures_of_my_old_3d&quot;&gt;3d engine&lt;/a&gt;. The second flyover was captured with disabled object pools. The terrain engine triangulates the ground depending on the position and view direction of the observer, which makes object allocation hard to predict. The triangulation runs in parallel to the rendering thread, which made the pool implementations a bit more complex than the example above.
&lt;/p&gt;
&lt;h3&gt;Every vertex, normal, triangle and quad-tree node is a pooled object (wireframe on mouse over)&lt;/h3&gt;
&lt;img src=&quot;https://lh5.googleusercontent.com/-3pDGd9j__kg/TiM0QlGr2SI/AAAAAAAAAEc/FCROwEpzwzc/Screenshot-18_small_sol.png&quot; name=&quot;img1&quot; onmouseover=&quot;document.img1.src=&apos;https://lh6.googleusercontent.com/-AxEFkEPOXGM/TiM0QjYAf9I/AAAAAAAAAEg/phMo4rXvTrk/Screenshot-19_small_wire.png&apos;&quot;
onmouseout=&quot;document.img1.src=&apos;https://lh5.googleusercontent.com/-3pDGd9j__kg/TiM0QlGr2SI/AAAAAAAAAEc/FCROwEpzwzc/Screenshot-18_small_sol.png&apos;&quot;/&gt;

&lt;h3&gt;on the left: flyover with pre allocated object pools; right: dynamic object allocation (new Object())&lt;/h3&gt;
&lt;video width=&quot;256&quot; height=&quot;192&quot; autobuffer=&quot;true&quot; controls=&quot;true&quot;&gt;
	&lt;source src=&quot;https://people.fh-landshut.de/~mbien/weblog/obj_pools/pooled_small.ogg&quot; type=&quot;video/ogg&quot;&gt;
        Your browser does not support the HTML5 video tag
&lt;/video&gt;
&lt;video width=&quot;256&quot; height=&quot;192&quot; autobuffer=&quot;true&quot; controls=&quot;true&quot;&gt;
	&lt;source src=&quot;https://people.fh-landshut.de/~mbien/weblog/obj_pools/dynamic_small.ogg&quot; type=&quot;video/ogg&quot;&gt;
        Your browser does not support the HTML5 video tag
&lt;/video&gt;
&lt;p&gt;
Notice the pauses at 7, 17 and 26s on the flyover with disabled pools (right video).
&lt;/p&gt;
&lt;p&gt;
&lt;i&gt;Note on the videos: The quality is very bad since the tool I used created 700MB files for the 30s videos, so a lot of frames got skipped. I even sampled them down from 1600x1200 to 1024x768 and limited the fps to 30, but the bottleneck was still the hard disk. This is the main reason why even the left video does not look smooth. (I even had to boot Windows for the first time in 2 years to use the tool!) I&apos;ll try to capture better vids next time.&lt;/i&gt;
&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;
Using pools requires discipline, is error prone, is not good for system throughput and does not play very well with threads. However, there are some attempts to make them more usable in case you think you need them. The physics engine &lt;a href=&quot;http://jbullet.advel.cz/&quot;&gt;JBullet&lt;/a&gt;, for example, uses JStackAlloc to avoid repetitive and cumbersome code via automatic bytecode instrumentation in the build process. Type Annotations (JSR 308, targeted for &lt;a href=&quot;http://openjdk.java.net/projects/jdk7/features/#f619&quot;&gt;OpenJDK 7&lt;/a&gt;) in combination with project &lt;a href=&quot;https://projectlombok.org/&quot;&gt;lombok&lt;/a&gt; and/or the automatic resource management proposal might provide further possibilities for simplifying the usage of object pools in Java and reducing the risk of memory leaks.
&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://mbien.dev/blog/entry/garbage_first_the_new_concurrent</id>
        <title type="html">Garbage First - It has never been so exciting to collect garbage :)</title>
        <author><name>mbien</name></author>
        <link rel="alternate" type="text/html" href="https://mbien.dev/blog/entry/garbage_first_the_new_concurrent"/>
        <published>2008-02-06T17:44:19+00:00</published>
        <updated>2020-01-31T02:13:10+00:00</updated> 
        <category term="Java" label="Java" />
        <category term="gc" scheme="http://roller.apache.org/ns/tags/" />
        <category term="java" scheme="http://roller.apache.org/ns/tags/" />
        <summary type="html">&lt;p align=&quot;justify&quot;&gt;If you are reading this entry, you probably already know about G1, the new &lt;a href=&quot;http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf&quot;&gt;Garbage First&lt;/a&gt; concurrent collector currently in development for Java 7.&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;a href=&quot;http://blogs.sun.com/jonthecollector/&quot;&gt;Jon Masamitsu&lt;/a&gt; recently made a great &lt;a href=&quot;http://blogs.sun.com/jonthecollector/entry/our_collectors&quot;&gt;overview of all GCs&lt;/a&gt; currently integrated into the JVM of Java SE 6 and announced the new G1 collector on his weblog.&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;I asked him some questions about G1 in the comments and a very interesting discussion started. &lt;a href=&quot;http://blogs.sun.com/tony/&quot;&gt;Tony Printezis&lt;/a&gt;, an expert from the HotSpot GC group, joined the discussion and answered all the questions in great detail.&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;(I have aggregated the discussion here because I think it is much easier to read if each answer follows next to its question without the noise between them)  &lt;/p&gt;&lt;div align=&quot;justify&quot;&gt;&lt;b&gt;me:&lt;/b&gt; I just recently thought about stack allocation for special kinds of objects. Couldn&apos;t the HotSpot compiler provide enough information to determine points in code where it is safe to delete certain objects? For example many methods use temporary objects. Is it really worth it to put them into the young generation? &lt;br /&gt;

&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;Regarding stack allocation. I believe (and I&apos;ve seen data on papers that support this) that stack allocation can pay off for GCs that (a) do not compact or (b) are not generational (or both, of course).&lt;br /&gt;&lt;br /&gt;In the case of (a), a non-compacting GC has an inherently slower allocation mechanism (e.g., free-list look-ups) than a compacting GC (e.g., &amp;quot;bump-the-pointer&amp;quot;). So, stack allocation can allow some objects to be allocated and reclaimed more cheaply (and, maybe, reduce fragmentation given that you cut down on the number of objects allocated / de-allocated from the free lists).&lt;br /&gt;&lt;br /&gt;In the case of (b), typically objects that are stack allocated would also be short-lived (not always, but I&apos;d guess this holds for the majority). So, effectively, you add the equivalent of a young generation to a non-generational GC.&lt;br /&gt;&lt;br /&gt;For generational GCs, results show that stack allocation might not pay off that much, given that compaction (I assume that most generational GCs would compact the young generation through copying) allows generational GCs to allocate and reclaim short-lived objects very cheaply. And, given that escape analysis (which is the mechanism that statically discovers which objects do not &amp;quot;escape&amp;quot; a thread and hence can be safely stack allocated as no other thread will access them) might only prove that a small proportion of objects allocated by the application can be safely stack allocated (so, the benefit would be quite small overall).&lt;br /&gt;&lt;br /&gt;(BTW, your 3D engine in Java shots on your blog look really cool!)&lt;br /&gt;&lt;/i&gt;&lt;br /&gt;thank you! :)&lt;br /&gt;&lt;/p&gt;</summary>
        <content type="html">&lt;p align=&quot;justify&quot;&gt;If you are reading this entry, you probably already know about G1 the new &lt;a href=&quot;http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf&quot;&gt;Garbage First&lt;/a&gt; concurrent collector currently in development for Java 7.&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;a href=&quot;http://blogs.sun.com/jonthecollector/&quot;&gt;Jon Masamitsu&lt;/a&gt; made recently a great &lt;a href=&quot;http://blogs.sun.com/jonthecollector/entry/our_collectors&quot;&gt;overview of all GCs&lt;/a&gt; currently integrated into JVM of Java SE 6 and announces the new G1 collector on his weblog.&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;I asked him in the comments some questions about G1 and a very interesting discussion starts. &lt;a href=&quot;http://blogs.sun.com/tony/&quot;&gt;Tony Printezis&lt;/a&gt; an expert from the HotSpot GC Group joined the discussion and answered all the questions very detailed. &lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;(I have aggregated the discussion here because I think it is much easier to read if the answer follows next to the question without the noise between them)&lt;br /&gt;&amp;nbsp; &lt;br /&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;div align=&quot;justify&quot;&gt;&lt;b&gt;me:&lt;/b&gt; You mentioned &amp;quot;Parallelism and concurrency in collections&amp;quot; in the featurelist of G1, is it already clear when a collections could be run concurrently and when would a full stop accrue?&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony:&lt;/b&gt;&lt;i&gt; Inititally, G1 will behave similarly to CMS, i.e., stop-the-world &amp;quot;young GCs&amp;quot; (with every now and then some old regions also being reclaimed during such GCs) and concurrent marking (but no sweeping, as it&apos;s not needed). But, with several advantages (compaction, better predictability, faster remarks, etc.). 
We have many ideas on how to proceed in the future to do even more work concurrently, but nothing is certain yet. so we will not say much else on this at this time.&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;div align=&quot;justify&quot;&gt;&lt;b&gt;me:&lt;/b&gt; I just recently thought about stack allocation for special kind of objects. Couldn&apos;t the hotspot compiler provide enough information to determine points in code when its safe to delete certain objects? For example many methods use temporary objects. Is it really worth to put them into the young generation? &lt;br /&gt;

&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;Regarding stack allocation. I believe (and I&apos;ve seen data on papers that support this) that stack allocation can pay off for GCs that (a) do not compact or (b) are not generational (or both, of course).&lt;br /&gt;&lt;br /&gt;In the case of (a), a non-compacting GC has an inherently slower allocation mechanism (e.g., free-list look-ups) than a compacting GC (e.g., &amp;quot;bump-the-pointer&amp;quot;). So, stack allocation can allow some objects to be allocated and reclaimed more cheaply (and, maybe, reduce fragmentation given that you cut down on the number of objects allocated / de-allocated from the free lists).&lt;br /&gt;&lt;br /&gt;In the case of (b), typically objects that are stack allocated would also be short-lived (not always, but I&apos;d guess this holds for the majority). So, effectively, you add the equivalent of a young generation to a non-generational GC.&lt;br /&gt;&lt;br /&gt;For generational GCs, results show that stack allocation might not pay off that much, given that compaction (I assume that most generational GCs would compact the young generation through copying) allows generational GCs to allocate and reclaim short-lived objects very cheaply. And, given that escape analysis (which is the mechanism that statically discovers which objects do not &amp;quot;escape&amp;quot; a thread and hence can be safely stack allocated as no other thread will access them) might only prove that a small proportion of objects allocated by the application can be safely stack allocated (so, the benefit would be quite small overall).&lt;br /&gt;&lt;br /&gt;(BTW, your 3D engine in Java shots on your blog look really cool!)&lt;br /&gt;&lt;/i&gt;&lt;br /&gt;[ thank you! 
:) ]&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Andrew:&lt;/b&gt; Is it worthwhile to try and collect &apos;cheap&apos; garbage? Or does the cost of tracking it outweigh the benefits? One thought was having a bit on each object that indicated whether it had ever been assigned to a non-stack location, coupled with a list on each stack frame for objects that had been allocated. Then, when unwinding that stack frame you could immediately GC any object that wasn&apos;t potentially referenced from elsewhere.&lt;br /&gt;(Note: the flag would just be turned on, no attempt would be made to reference count, etc)&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony:&lt;/b&gt; &lt;i&gt;In practice doing what you&apos;re proposing is not really straightforward (even though it sounds good &amp;quot;on paper&amp;quot;!).&lt;/i&gt;&lt;/p&gt;&lt;div align=&quot;justify&quot;&gt;


&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;i&gt;The main issue is that for GCs that rely on compaction (or that at least have a copying young generation, which is basically all the GCs in HotSpot), GCing specific objects is just not possible (or at least, it&apos;s not very efficient). Compacting GCs assume that, when a GC happens, all live objects will move somewhere (to another space in copying GCs, or to the bottom of the compacting space in sliding compacting GCs) and all available free space will be in one place. This means that such GCs do not keep track of individual free chunks and makes it impossible to just reclaim specific objects. And there are several good reasons why we like such collectors, one of the most important ones being that they allow for very fast, very scalable bump-the-pointer allocation.&lt;/i&gt;&lt;/p&gt;&lt;div align=&quot;justify&quot;&gt;

&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;i&gt;Even if we could GC specific objects, how are we going to find all the objects allocated by a particular stack frame? Are we going to link them at allocation? That&apos;s extra overhead.&lt;/i&gt;&lt;/p&gt;&lt;div align=&quot;justify&quot;&gt;

&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;i&gt;Performance-wise, copying young generations (like the ones we have in HotSpot) are super efficient in reclaiming young, short-lived objects (they just evacuate the few survivors they come across and never even touch the dead objects; this is why they are so fast). So, in most cases, they should be able to reclaim space at least as efficiently as what you propose. In fact, they might be even more efficient, given that they don&apos;t have to iterate over the dead objects: they copy the survivors, the rest are reclaimed, done. Whereas, according to what you propose, we would have to iterate over the dead objects and de-allocate them one-by-one.&lt;br /&gt;&lt;br /&gt;To summarize, your scheme might work for a non-generational, non-compacting GC (where you can de-allocate specific objects). But, I can&apos;t see it working for our GCs.&lt;br /&gt;&lt;br /&gt;I got slightly carried away in my reply here... Hope it helps!&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Aaron:&lt;/b&gt; Will the new GC also collect the non-heap (i.e. Code Cache and Perm Gen)? Or will you get rid of those two?&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;Right now the G1 heap replaces the young / old generations. I.e., we still have a permanent space + code cache. In the future, we might be able to incorporate the permanent space into G1 heap (there are many tricky issues that we need to resolve first to do that...). However, I don&apos;t think we&apos;ll also incorporate the code cache too.&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Adam:&lt;/b&gt; How large is a region likely to be?&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;Right now, regions are 1MB. 
We allocate a contiguous block of regions for objects that are &amp;quot;humongous&amp;quot;, i.e. that are too large to fit in one region.&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Adam:&lt;/b&gt; Will TLAB&apos;s be (partially) replaced by regions, so threads may be allocating into different regions?&lt;/p&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;No, regions will not replace TLABs. There&apos;s one allocating region and threads will allocate TLABs from it. When that region is full, then it will be &amp;quot;retired&amp;quot; and another one will become the allocating region. So, a single region might hold TLABs from several threads.&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;div align=&quot;justify&quot;&gt;&lt;b&gt;Adam:&lt;/b&gt; It&apos;s not obvious to me how to choose which regions have more garbage, without first having marked them, but then how do you know which region to allocate into?&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;As I mentioned in an earlier post, we perform a marking phase every now and then to get up-to-date liveness information. &lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;&lt;div align=&quot;justify&quot;&gt;&lt;b&gt;Adam:&lt;/b&gt; When doing a young gen collection, will you compact into a different region or the same region? I.e., does it work more like a copying collector than a mark-sweep-compact? (Or am I missing something there :)&lt;/div&gt;&lt;p align=&quot;justify&quot;&gt;&lt;b&gt;Tony: &lt;/b&gt;&lt;i&gt;(you&apos;re not missing anything! good question) Collections are done by copying. 
Basically, we pick the regions we want to GC (we refer to that set of regions as the &amp;quot;collection set&amp;quot;) and we evacuate the surviving objects from those regions to another set (the &amp;quot;to-space&amp;quot;). The assumption is that to-space will have fewer regions than the collection set and this is how we reclaim space. Given that we assume that the survival rate in the collection set will be quite low (we chose which regions to GC, remember?), copying is the most efficient way to perform such collections.&lt;/i&gt;&lt;/p&gt;&lt;hr align=&quot;justify&quot; width=&quot;100%&quot; size=&quot;2&quot; /&gt;</content>
    </entry>
</feed>

