@@ -351,7 +351,6 @@ follows these steps in order:
 the reference counts fall to 0, triggering the destruction of all unreachable
 objects.
 
-
 Optimization: incremental collection
 ====================================
 
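The cycle-breaking behavior described in the steps above can be observed from Python. This is a minimal sketch using only the standard `gc` module; the `Node` class is a hypothetical container introduced for illustration:

```python
import gc

class Node:
    """A simple container that can participate in a reference cycle."""

# Build a cycle: each object keeps the other's reference count above zero.
a, b = Node(), Node()
a.partner = b
b.partner = a

del a, b               # the cycle is now unreachable, but refcounts stay nonzero
found = gc.collect()   # the collector breaks the cycle; refcounts fall to 0
print(found)           # number of unreachable objects found (at least 2 here)
```

Without the cycle detector, these objects would leak, since plain reference counting alone can never reclaim them.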
@@ -485,46 +484,6 @@ specifically in a generation by calling `gc.collect(generation=NUM)`.
 ```
 
 
-Optimization: visiting reachable objects
-========================================
-
-An object cannot be garbage if it can be reached.
-
-To avoid having to identify reference cycles across the whole heap, we can
-reduce the amount of work done considerably by first moving most reachable objects
-to the `visited` space. Empirically, most reachable objects can be reached from a
-small set of global objects and local variables.
-This step does much less work per object, so reduces the time spent
-performing garbage collection by at least half.
-
-> [!NOTE]
-> Objects that are not determined to be reachable by this pass are not necessarily
-> unreachable. We still need to perform the main algorithm to determine which objects
-> are actually unreachable.
-
-We use the same technique of forming a transitive closure as the incremental
-collector does to find reachable objects, seeding the list with some global
-objects and the currently executing frames.
-
-This phase moves objects to the `visited` space, as follows:
-
-1. All objects directly referred to by any builtin class, the `sys` module, the `builtins`
-   module and all objects directly referred to from stack frames are added to a working
-   set of reachable objects.
-2. Until this working set is empty:
-   1. Pop an object from the set and move it to the `visited` space
-   2. For each object directly reachable from that object:
-      * If it is not already in `visited` space and it is a GC object,
-        add it to the working set
-
-
-Before each increment of collection is performed, the stacks are scanned
-to check for any new stack frames that have been created since the last
-increment. All objects directly referred to from those stack frames are
-added to the working set.
-Then the above algorithm is repeated, starting from step 2.
-
-
 Optimization: reusing fields to save memory
 ===========================================
 
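The transitive-closure technique mentioned in the removed section can be sketched in pure Python. `reachable_closure` is a hypothetical helper, and `gc.get_referents` stands in for the collector's internal traversal; the real pass operates on the C-level `visited` space, not a Python set:

```python
import gc

def reachable_closure(roots):
    # Working set seeded with the roots (step 1 of the algorithm).
    work = [obj for obj in roots if gc.is_tracked(obj)]
    visited = set()                      # stands in for the `visited` space
    while work:                          # step 2: until the working set is empty
        obj = work.pop()                 # 2.1: pop and mark it visited
        if id(obj) in visited:
            continue
        visited.add(id(obj))
        for ref in gc.get_referents(obj):    # 2.2: objects directly reachable
            if gc.is_tracked(ref) and id(ref) not in visited:
                work.append(ref)
    return visited

inner = [1, 2]
outer = {"child": inner}
ids = reachable_closure([outer])
print(id(outer) in ids, id(inner) in ids)   # both are reachable from the root
```

The worklist makes the traversal iterative rather than recursive, so deep object graphs do not exhaust the stack.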
@@ -573,8 +532,8 @@ of `PyGC_Head` discussed in the `Memory layout and object structure`_ section:
 currently in. Instead, when that's needed, ad hoc tricks (like the
 `NEXT_MASK_UNREACHABLE` flag) are employed.
 
-Optimization: delayed untracking of containers
-==============================================
+Optimization: delay tracking containers
+=======================================
 
 Certain types of containers cannot participate in a reference cycle, and so do
 not need to be tracked by the garbage collector. Untracking these objects
@@ -589,8 +548,8 @@ a container:
 As a general rule, instances of atomic types aren't tracked and instances of
 non-atomic types (containers, user-defined objects...) are. However, some
 type-specific optimizations can be present in order to suppress the garbage
-collector footprint of simple instances. Historically, both dictionaries and
-tuples were untracked during garbage collection. Now it is only tuples:
+collector footprint of simple instances. Some examples of native types that
+benefit from delayed tracking:
 
 - Tuples containing only immutable objects (integers, strings etc,
   and recursively, tuples of immutable objects) do not need to be tracked. The
@@ -599,8 +558,14 @@ tuples were untracked during garbage collection. Now it is only tuples:
 tuples at creation time. Instead, all tuples except the empty tuple are tracked
 when created. During garbage collection it is determined whether any surviving
 tuples can be untracked. A tuple can be untracked if all of its contents are
-already not tracked. Tuples are examined for untracking when moved from the
-young to the old generation.
+already not tracked. Tuples are examined for untracking in all garbage collection
+cycles. It may take more than one cycle to untrack a tuple.
+
+- Dictionaries containing only immutable objects also do not need to be tracked.
+  Dictionaries are untracked when created. If a tracked item is inserted into a
+  dictionary (either as a key or value), the dictionary becomes tracked. During a
+  full garbage collection (all generations), the collector will untrack any dictionaries
+  whose contents are not tracked.
 
 The garbage collector module provides the Python function `is_tracked(obj)`, which returns
 the current tracking status of the object. Subsequent garbage collections may change the
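The delayed untracking of tuples can be observed directly. A hedged sketch, assuming the long-standing CPython behavior that tuples are examined for untracking during collection (the exact number of cycles needed can vary):

```python
import gc

# Built at runtime so the tuple is not a compile-time constant.
t = tuple([1, "a"])
print(gc.is_tracked(t))    # True: every non-empty tuple starts out tracked

gc.collect()               # surviving tuples are examined for untracking
gc.collect()               # untracking can take more than one cycle
print(gc.is_tracked(t))    # False: all contents are untracked immutables
```

If the tuple instead held a mutable container such as a list, it would stay tracked, since it could then participate in a reference cycle.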
@@ -613,9 +578,11 @@ tracking status of the object.
 False
 >>> gc.is_tracked([])
 True
->>> gc.is_tracked(("a", 1))
+>>> gc.is_tracked({})
 False
 >>> gc.is_tracked({"a": 1})
+False
+>>> gc.is_tracked({"a": []})
 True
 ```
 